Nov 24 03:47:38 np0005533252 kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 24 03:47:38 np0005533252 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 24 03:47:38 np0005533252 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 03:47:38 np0005533252 kernel: BIOS-provided physical RAM map:
Nov 24 03:47:38 np0005533252 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 03:47:38 np0005533252 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 03:47:38 np0005533252 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 03:47:38 np0005533252 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 24 03:47:38 np0005533252 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 24 03:47:38 np0005533252 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 03:47:38 np0005533252 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 03:47:38 np0005533252 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 24 03:47:38 np0005533252 kernel: NX (Execute Disable) protection: active
Nov 24 03:47:38 np0005533252 kernel: APIC: Static calls initialized
Nov 24 03:47:38 np0005533252 kernel: SMBIOS 2.8 present.
Nov 24 03:47:38 np0005533252 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 24 03:47:38 np0005533252 kernel: Hypervisor detected: KVM
Nov 24 03:47:38 np0005533252 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 03:47:38 np0005533252 kernel: kvm-clock: using sched offset of 6361643521 cycles
Nov 24 03:47:38 np0005533252 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 03:47:38 np0005533252 kernel: tsc: Detected 2799.998 MHz processor
Nov 24 03:47:38 np0005533252 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 24 03:47:38 np0005533252 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 03:47:38 np0005533252 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 24 03:47:38 np0005533252 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 24 03:47:38 np0005533252 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 24 03:47:38 np0005533252 kernel: Using GB pages for direct mapping
Nov 24 03:47:38 np0005533252 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 24 03:47:38 np0005533252 kernel: ACPI: Early table checksum verification disabled
Nov 24 03:47:38 np0005533252 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 24 03:47:38 np0005533252 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 03:47:38 np0005533252 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 03:47:38 np0005533252 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 03:47:38 np0005533252 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 24 03:47:38 np0005533252 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 03:47:38 np0005533252 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 03:47:38 np0005533252 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 24 03:47:38 np0005533252 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 24 03:47:38 np0005533252 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 24 03:47:38 np0005533252 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 24 03:47:38 np0005533252 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 24 03:47:38 np0005533252 kernel: No NUMA configuration found
Nov 24 03:47:38 np0005533252 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 24 03:47:38 np0005533252 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 24 03:47:38 np0005533252 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 24 03:47:38 np0005533252 kernel: Zone ranges:
Nov 24 03:47:38 np0005533252 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 03:47:38 np0005533252 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 03:47:38 np0005533252 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 03:47:38 np0005533252 kernel:  Device   empty
Nov 24 03:47:38 np0005533252 kernel: Movable zone start for each node
Nov 24 03:47:38 np0005533252 kernel: Early memory node ranges
Nov 24 03:47:38 np0005533252 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 03:47:38 np0005533252 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 24 03:47:38 np0005533252 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 03:47:38 np0005533252 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 24 03:47:38 np0005533252 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 03:47:38 np0005533252 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 03:47:38 np0005533252 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 24 03:47:38 np0005533252 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 03:47:38 np0005533252 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 03:47:38 np0005533252 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 03:47:38 np0005533252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 03:47:38 np0005533252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 03:47:38 np0005533252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 03:47:38 np0005533252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 03:47:38 np0005533252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 03:47:38 np0005533252 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 03:47:38 np0005533252 kernel: TSC deadline timer available
Nov 24 03:47:38 np0005533252 kernel: CPU topo: Max. logical packages:   8
Nov 24 03:47:38 np0005533252 kernel: CPU topo: Max. logical dies:       8
Nov 24 03:47:38 np0005533252 kernel: CPU topo: Max. dies per package:   1
Nov 24 03:47:38 np0005533252 kernel: CPU topo: Max. threads per core:   1
Nov 24 03:47:38 np0005533252 kernel: CPU topo: Num. cores per package:     1
Nov 24 03:47:38 np0005533252 kernel: CPU topo: Num. threads per package:   1
Nov 24 03:47:38 np0005533252 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 24 03:47:38 np0005533252 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 24 03:47:38 np0005533252 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 24 03:47:38 np0005533252 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 24 03:47:38 np0005533252 kernel: Booting paravirtualized kernel on KVM
Nov 24 03:47:38 np0005533252 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 03:47:38 np0005533252 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 24 03:47:38 np0005533252 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 24 03:47:38 np0005533252 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 03:47:38 np0005533252 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 03:47:38 np0005533252 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 24 03:47:38 np0005533252 kernel: random: crng init done
Nov 24 03:47:38 np0005533252 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: Fallback order for Node 0: 0 
Nov 24 03:47:38 np0005533252 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 24 03:47:38 np0005533252 kernel: Policy zone: Normal
Nov 24 03:47:38 np0005533252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 03:47:38 np0005533252 kernel: software IO TLB: area num 8.
Nov 24 03:47:38 np0005533252 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 24 03:47:38 np0005533252 kernel: ftrace: allocating 49298 entries in 193 pages
Nov 24 03:47:38 np0005533252 kernel: ftrace: allocated 193 pages with 3 groups
Nov 24 03:47:38 np0005533252 kernel: Dynamic Preempt: voluntary
Nov 24 03:47:38 np0005533252 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 03:47:38 np0005533252 kernel: rcu: 	RCU event tracing is enabled.
Nov 24 03:47:38 np0005533252 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 24 03:47:38 np0005533252 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 24 03:47:38 np0005533252 kernel: 	Rude variant of Tasks RCU enabled.
Nov 24 03:47:38 np0005533252 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 24 03:47:38 np0005533252 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 03:47:38 np0005533252 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 24 03:47:38 np0005533252 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 03:47:38 np0005533252 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 03:47:38 np0005533252 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 03:47:38 np0005533252 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 24 03:47:38 np0005533252 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 03:47:38 np0005533252 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 24 03:47:38 np0005533252 kernel: Console: colour VGA+ 80x25
Nov 24 03:47:38 np0005533252 kernel: printk: console [ttyS0] enabled
Nov 24 03:47:38 np0005533252 kernel: ACPI: Core revision 20230331
Nov 24 03:47:38 np0005533252 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 03:47:38 np0005533252 kernel: x2apic enabled
Nov 24 03:47:38 np0005533252 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 03:47:38 np0005533252 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 24 03:47:38 np0005533252 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 24 03:47:38 np0005533252 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 03:47:38 np0005533252 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 03:47:38 np0005533252 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 03:47:38 np0005533252 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 03:47:38 np0005533252 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 03:47:38 np0005533252 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 03:47:38 np0005533252 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 03:47:38 np0005533252 kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 03:47:38 np0005533252 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 03:47:38 np0005533252 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 03:47:38 np0005533252 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 03:47:38 np0005533252 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 03:47:38 np0005533252 kernel: x86/bugs: return thunk changed
Nov 24 03:47:38 np0005533252 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 03:47:38 np0005533252 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 03:47:38 np0005533252 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 03:47:38 np0005533252 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 03:47:38 np0005533252 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 24 03:47:38 np0005533252 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 03:47:38 np0005533252 kernel: Freeing SMP alternatives memory: 40K
Nov 24 03:47:38 np0005533252 kernel: pid_max: default: 32768 minimum: 301
Nov 24 03:47:38 np0005533252 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 24 03:47:38 np0005533252 kernel: landlock: Up and running.
Nov 24 03:47:38 np0005533252 kernel: Yama: becoming mindful.
Nov 24 03:47:38 np0005533252 kernel: SELinux:  Initializing.
Nov 24 03:47:38 np0005533252 kernel: LSM support for eBPF active
Nov 24 03:47:38 np0005533252 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 03:47:38 np0005533252 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 03:47:38 np0005533252 kernel: ... version:                0
Nov 24 03:47:38 np0005533252 kernel: ... bit width:              48
Nov 24 03:47:38 np0005533252 kernel: ... generic registers:      6
Nov 24 03:47:38 np0005533252 kernel: ... value mask:             0000ffffffffffff
Nov 24 03:47:38 np0005533252 kernel: ... max period:             00007fffffffffff
Nov 24 03:47:38 np0005533252 kernel: ... fixed-purpose events:   0
Nov 24 03:47:38 np0005533252 kernel: ... event mask:             000000000000003f
Nov 24 03:47:38 np0005533252 kernel: signal: max sigframe size: 1776
Nov 24 03:47:38 np0005533252 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 03:47:38 np0005533252 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 24 03:47:38 np0005533252 kernel: smp: Bringing up secondary CPUs ...
Nov 24 03:47:38 np0005533252 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 03:47:38 np0005533252 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 24 03:47:38 np0005533252 kernel: smp: Brought up 1 node, 8 CPUs
Nov 24 03:47:38 np0005533252 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 24 03:47:38 np0005533252 kernel: node 0 deferred pages initialised in 9ms
Nov 24 03:47:38 np0005533252 kernel: Memory: 7765704K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 24 03:47:38 np0005533252 kernel: devtmpfs: initialized
Nov 24 03:47:38 np0005533252 kernel: x86/mm: Memory block size: 128MB
Nov 24 03:47:38 np0005533252 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 03:47:38 np0005533252 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 03:47:38 np0005533252 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 03:47:38 np0005533252 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 24 03:47:38 np0005533252 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 24 03:47:38 np0005533252 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 24 03:47:38 np0005533252 kernel: audit: initializing netlink subsys (disabled)
Nov 24 03:47:38 np0005533252 kernel: audit: type=2000 audit(1763974056.447:1): state=initialized audit_enabled=0 res=1
Nov 24 03:47:38 np0005533252 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 24 03:47:38 np0005533252 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 03:47:38 np0005533252 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 03:47:38 np0005533252 kernel: cpuidle: using governor menu
Nov 24 03:47:38 np0005533252 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 03:47:38 np0005533252 kernel: PCI: Using configuration type 1 for base access
Nov 24 03:47:38 np0005533252 kernel: PCI: Using configuration type 1 for extended access
Nov 24 03:47:38 np0005533252 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 03:47:38 np0005533252 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 03:47:38 np0005533252 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 03:47:38 np0005533252 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 03:47:38 np0005533252 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 03:47:38 np0005533252 kernel: Demotion targets for Node 0: null
Nov 24 03:47:38 np0005533252 kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 03:47:38 np0005533252 kernel: ACPI: Added _OSI(Module Device)
Nov 24 03:47:38 np0005533252 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 03:47:38 np0005533252 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 24 03:47:38 np0005533252 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 03:47:38 np0005533252 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 03:47:38 np0005533252 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 24 03:47:38 np0005533252 kernel: ACPI: Interpreter enabled
Nov 24 03:47:38 np0005533252 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 24 03:47:38 np0005533252 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 03:47:38 np0005533252 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 03:47:38 np0005533252 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 03:47:38 np0005533252 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 03:47:38 np0005533252 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 03:47:38 np0005533252 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [3] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [4] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [5] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [6] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [7] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [8] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [9] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [10] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [11] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [12] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [13] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [14] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [15] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [16] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [17] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [18] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [19] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [20] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [21] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [22] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [23] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [24] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [25] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [26] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [27] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [28] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [29] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [30] registered
Nov 24 03:47:38 np0005533252 kernel: acpiphp: Slot [31] registered
Nov 24 03:47:38 np0005533252 kernel: PCI host bridge to bus 0000:00
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 03:47:38 np0005533252 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 03:47:38 np0005533252 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 03:47:38 np0005533252 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 03:47:38 np0005533252 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 03:47:38 np0005533252 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 03:47:38 np0005533252 kernel: iommu: Default domain type: Translated
Nov 24 03:47:38 np0005533252 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 03:47:38 np0005533252 kernel: SCSI subsystem initialized
Nov 24 03:47:38 np0005533252 kernel: ACPI: bus type USB registered
Nov 24 03:47:38 np0005533252 kernel: usbcore: registered new interface driver usbfs
Nov 24 03:47:38 np0005533252 kernel: usbcore: registered new interface driver hub
Nov 24 03:47:38 np0005533252 kernel: usbcore: registered new device driver usb
Nov 24 03:47:38 np0005533252 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 24 03:47:38 np0005533252 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 24 03:47:38 np0005533252 kernel: PTP clock support registered
Nov 24 03:47:38 np0005533252 kernel: EDAC MC: Ver: 3.0.0
Nov 24 03:47:38 np0005533252 kernel: NetLabel: Initializing
Nov 24 03:47:38 np0005533252 kernel: NetLabel:  domain hash size = 128
Nov 24 03:47:38 np0005533252 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 24 03:47:38 np0005533252 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 24 03:47:38 np0005533252 kernel: PCI: Using ACPI for IRQ routing
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 03:47:38 np0005533252 kernel: vgaarb: loaded
Nov 24 03:47:38 np0005533252 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 03:47:38 np0005533252 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 03:47:38 np0005533252 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 03:47:38 np0005533252 kernel: pnp: PnP ACPI init
Nov 24 03:47:38 np0005533252 kernel: pnp: PnP ACPI: found 5 devices
Nov 24 03:47:38 np0005533252 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 03:47:38 np0005533252 kernel: NET: Registered PF_INET protocol family
Nov 24 03:47:38 np0005533252 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 24 03:47:38 np0005533252 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 03:47:38 np0005533252 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 03:47:38 np0005533252 kernel: NET: Registered PF_XDP protocol family
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 24 03:47:38 np0005533252 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 03:47:38 np0005533252 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 03:47:38 np0005533252 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74599 usecs
Nov 24 03:47:38 np0005533252 kernel: PCI: CLS 0 bytes, default 64
Nov 24 03:47:38 np0005533252 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 03:47:38 np0005533252 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 24 03:47:38 np0005533252 kernel: ACPI: bus type thunderbolt registered
Nov 24 03:47:38 np0005533252 kernel: Trying to unpack rootfs image as initramfs...
Nov 24 03:47:38 np0005533252 kernel: Initialise system trusted keyrings
Nov 24 03:47:38 np0005533252 kernel: Key type blacklist registered
Nov 24 03:47:38 np0005533252 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 24 03:47:38 np0005533252 kernel: zbud: loaded
Nov 24 03:47:38 np0005533252 kernel: integrity: Platform Keyring initialized
Nov 24 03:47:38 np0005533252 kernel: integrity: Machine keyring initialized
Nov 24 03:47:38 np0005533252 kernel: Freeing initrd memory: 85868K
Nov 24 03:47:38 np0005533252 kernel: NET: Registered PF_ALG protocol family
Nov 24 03:47:38 np0005533252 kernel: xor: automatically using best checksumming function   avx       
Nov 24 03:47:38 np0005533252 kernel: Key type asymmetric registered
Nov 24 03:47:38 np0005533252 kernel: Asymmetric key parser 'x509' registered
Nov 24 03:47:38 np0005533252 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 24 03:47:38 np0005533252 kernel: io scheduler mq-deadline registered
Nov 24 03:47:38 np0005533252 kernel: io scheduler kyber registered
Nov 24 03:47:38 np0005533252 kernel: io scheduler bfq registered
Nov 24 03:47:38 np0005533252 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 24 03:47:38 np0005533252 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 24 03:47:38 np0005533252 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 24 03:47:38 np0005533252 kernel: ACPI: button: Power Button [PWRF]
Nov 24 03:47:38 np0005533252 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 03:47:38 np0005533252 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 03:47:38 np0005533252 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 03:47:38 np0005533252 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 03:47:38 np0005533252 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 03:47:38 np0005533252 kernel: Non-volatile memory driver v1.3
Nov 24 03:47:38 np0005533252 kernel: rdac: device handler registered
Nov 24 03:47:38 np0005533252 kernel: hp_sw: device handler registered
Nov 24 03:47:38 np0005533252 kernel: emc: device handler registered
Nov 24 03:47:38 np0005533252 kernel: alua: device handler registered
Nov 24 03:47:38 np0005533252 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 24 03:47:38 np0005533252 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 24 03:47:38 np0005533252 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 24 03:47:38 np0005533252 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 24 03:47:38 np0005533252 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 24 03:47:38 np0005533252 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 24 03:47:38 np0005533252 kernel: usb usb1: Product: UHCI Host Controller
Nov 24 03:47:38 np0005533252 kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 24 03:47:38 np0005533252 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 24 03:47:38 np0005533252 kernel: hub 1-0:1.0: USB hub found
Nov 24 03:47:38 np0005533252 kernel: hub 1-0:1.0: 2 ports detected
Nov 24 03:47:38 np0005533252 kernel: usbcore: registered new interface driver usbserial_generic
Nov 24 03:47:38 np0005533252 kernel: usbserial: USB Serial support registered for generic
Nov 24 03:47:38 np0005533252 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 03:47:38 np0005533252 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 03:47:38 np0005533252 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 03:47:38 np0005533252 kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 03:47:38 np0005533252 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 03:47:38 np0005533252 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 24 03:47:38 np0005533252 kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 03:47:38 np0005533252 kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T08:47:37 UTC (1763974057)
Nov 24 03:47:38 np0005533252 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 24 03:47:38 np0005533252 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 03:47:38 np0005533252 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 24 03:47:38 np0005533252 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 24 03:47:38 np0005533252 kernel: usbcore: registered new interface driver usbhid
Nov 24 03:47:38 np0005533252 kernel: usbhid: USB HID core driver
Nov 24 03:47:38 np0005533252 kernel: drop_monitor: Initializing network drop monitor service
Nov 24 03:47:38 np0005533252 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 24 03:47:38 np0005533252 kernel: Initializing XFRM netlink socket
Nov 24 03:47:38 np0005533252 kernel: NET: Registered PF_INET6 protocol family
Nov 24 03:47:38 np0005533252 kernel: Segment Routing with IPv6
Nov 24 03:47:38 np0005533252 kernel: NET: Registered PF_PACKET protocol family
Nov 24 03:47:38 np0005533252 kernel: mpls_gso: MPLS GSO support
Nov 24 03:47:38 np0005533252 kernel: IPI shorthand broadcast: enabled
Nov 24 03:47:38 np0005533252 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 24 03:47:38 np0005533252 kernel: AES CTR mode by8 optimization enabled
Nov 24 03:47:38 np0005533252 kernel: sched_clock: Marking stable (1159002040, 149605137)->(1418836053, -110228876)
Nov 24 03:47:38 np0005533252 kernel: registered taskstats version 1
Nov 24 03:47:38 np0005533252 kernel: Loading compiled-in X.509 certificates
Nov 24 03:47:38 np0005533252 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 03:47:38 np0005533252 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 24 03:47:38 np0005533252 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 24 03:47:38 np0005533252 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 24 03:47:38 np0005533252 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 24 03:47:38 np0005533252 kernel: Demotion targets for Node 0: null
Nov 24 03:47:38 np0005533252 kernel: page_owner is disabled
Nov 24 03:47:38 np0005533252 kernel: Key type .fscrypt registered
Nov 24 03:47:38 np0005533252 kernel: Key type fscrypt-provisioning registered
Nov 24 03:47:38 np0005533252 kernel: Key type big_key registered
Nov 24 03:47:38 np0005533252 kernel: Key type encrypted registered
Nov 24 03:47:38 np0005533252 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 03:47:38 np0005533252 kernel: Loading compiled-in module X.509 certificates
Nov 24 03:47:38 np0005533252 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 03:47:38 np0005533252 kernel: ima: Allocated hash algorithm: sha256
Nov 24 03:47:38 np0005533252 kernel: ima: No architecture policies found
Nov 24 03:47:38 np0005533252 kernel: evm: Initialising EVM extended attributes:
Nov 24 03:47:38 np0005533252 kernel: evm: security.selinux
Nov 24 03:47:38 np0005533252 kernel: evm: security.SMACK64 (disabled)
Nov 24 03:47:38 np0005533252 kernel: evm: security.SMACK64EXEC (disabled)
Nov 24 03:47:38 np0005533252 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 24 03:47:38 np0005533252 kernel: evm: security.SMACK64MMAP (disabled)
Nov 24 03:47:38 np0005533252 kernel: evm: security.apparmor (disabled)
Nov 24 03:47:38 np0005533252 kernel: evm: security.ima
Nov 24 03:47:38 np0005533252 kernel: evm: security.capability
Nov 24 03:47:38 np0005533252 kernel: evm: HMAC attrs: 0x1
Nov 24 03:47:38 np0005533252 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 24 03:47:38 np0005533252 kernel: Running certificate verification RSA selftest
Nov 24 03:47:38 np0005533252 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 24 03:47:38 np0005533252 kernel: Running certificate verification ECDSA selftest
Nov 24 03:47:38 np0005533252 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 24 03:47:38 np0005533252 kernel: clk: Disabling unused clocks
Nov 24 03:47:38 np0005533252 kernel: Freeing unused decrypted memory: 2028K
Nov 24 03:47:38 np0005533252 kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 24 03:47:38 np0005533252 kernel: Write protecting the kernel read-only data: 30720k
Nov 24 03:47:38 np0005533252 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 24 03:47:38 np0005533252 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 24 03:47:38 np0005533252 kernel: Run /init as init process
Nov 24 03:47:38 np0005533252 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 03:47:38 np0005533252 systemd: Detected virtualization kvm.
Nov 24 03:47:38 np0005533252 systemd: Detected architecture x86-64.
Nov 24 03:47:38 np0005533252 systemd: Running in initrd.
Nov 24 03:47:38 np0005533252 systemd: No hostname configured, using default hostname.
Nov 24 03:47:38 np0005533252 systemd: Hostname set to <localhost>.
Nov 24 03:47:38 np0005533252 systemd: Initializing machine ID from VM UUID.
Nov 24 03:47:38 np0005533252 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 24 03:47:38 np0005533252 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 24 03:47:38 np0005533252 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 24 03:47:38 np0005533252 kernel: usb 1-1: Manufacturer: QEMU
Nov 24 03:47:38 np0005533252 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 24 03:47:38 np0005533252 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 24 03:47:38 np0005533252 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 24 03:47:38 np0005533252 systemd: Queued start job for default target Initrd Default Target.
Nov 24 03:47:38 np0005533252 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 03:47:38 np0005533252 systemd: Reached target Local Encrypted Volumes.
Nov 24 03:47:38 np0005533252 systemd: Reached target Initrd /usr File System.
Nov 24 03:47:38 np0005533252 systemd: Reached target Local File Systems.
Nov 24 03:47:38 np0005533252 systemd: Reached target Path Units.
Nov 24 03:47:38 np0005533252 systemd: Reached target Slice Units.
Nov 24 03:47:38 np0005533252 systemd: Reached target Swaps.
Nov 24 03:47:38 np0005533252 systemd: Reached target Timer Units.
Nov 24 03:47:38 np0005533252 systemd: Listening on D-Bus System Message Bus Socket.
Nov 24 03:47:38 np0005533252 systemd: Listening on Journal Socket (/dev/log).
Nov 24 03:47:38 np0005533252 systemd: Listening on Journal Socket.
Nov 24 03:47:38 np0005533252 systemd: Listening on udev Control Socket.
Nov 24 03:47:38 np0005533252 systemd: Listening on udev Kernel Socket.
Nov 24 03:47:38 np0005533252 systemd: Reached target Socket Units.
Nov 24 03:47:38 np0005533252 systemd: Starting Create List of Static Device Nodes...
Nov 24 03:47:38 np0005533252 systemd: Starting Journal Service...
Nov 24 03:47:38 np0005533252 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 03:47:38 np0005533252 systemd: Starting Apply Kernel Variables...
Nov 24 03:47:38 np0005533252 systemd: Starting Create System Users...
Nov 24 03:47:38 np0005533252 systemd: Starting Setup Virtual Console...
Nov 24 03:47:38 np0005533252 systemd: Finished Create List of Static Device Nodes.
Nov 24 03:47:38 np0005533252 systemd: Finished Apply Kernel Variables.
Nov 24 03:47:38 np0005533252 systemd: Finished Create System Users.
Nov 24 03:47:38 np0005533252 systemd: Starting Create Static Device Nodes in /dev...
Nov 24 03:47:38 np0005533252 systemd-journald[307]: Journal started
Nov 24 03:47:38 np0005533252 systemd-journald[307]: Runtime Journal (/run/log/journal/719139db46ba4050a77b5fa732a73807) is 8.0M, max 153.6M, 145.6M free.
Nov 24 03:47:38 np0005533252 systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 24 03:47:38 np0005533252 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 24 03:47:38 np0005533252 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 24 03:47:38 np0005533252 systemd: Started Journal Service.
Nov 24 03:47:38 np0005533252 systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 03:47:38 np0005533252 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 03:47:38 np0005533252 systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 03:47:38 np0005533252 systemd[1]: Finished Setup Virtual Console.
Nov 24 03:47:38 np0005533252 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 24 03:47:38 np0005533252 systemd[1]: Starting dracut cmdline hook...
Nov 24 03:47:38 np0005533252 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 24 03:47:38 np0005533252 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 03:47:38 np0005533252 systemd[1]: Finished dracut cmdline hook.
Nov 24 03:47:38 np0005533252 systemd[1]: Starting dracut pre-udev hook...
Nov 24 03:47:38 np0005533252 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 03:47:38 np0005533252 kernel: device-mapper: uevent: version 1.0.3
Nov 24 03:47:38 np0005533252 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 24 03:47:38 np0005533252 kernel: RPC: Registered named UNIX socket transport module.
Nov 24 03:47:38 np0005533252 kernel: RPC: Registered udp transport module.
Nov 24 03:47:38 np0005533252 kernel: RPC: Registered tcp transport module.
Nov 24 03:47:38 np0005533252 kernel: RPC: Registered tcp-with-tls transport module.
Nov 24 03:47:38 np0005533252 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 24 03:47:38 np0005533252 rpc.statd[441]: Version 2.5.4 starting
Nov 24 03:47:38 np0005533252 rpc.statd[441]: Initializing NSM state
Nov 24 03:47:38 np0005533252 rpc.idmapd[446]: Setting log level to 0
Nov 24 03:47:38 np0005533252 systemd[1]: Finished dracut pre-udev hook.
Nov 24 03:47:38 np0005533252 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 03:47:38 np0005533252 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 03:47:38 np0005533252 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 03:47:38 np0005533252 systemd[1]: Starting dracut pre-trigger hook...
Nov 24 03:47:38 np0005533252 systemd[1]: Finished dracut pre-trigger hook.
Nov 24 03:47:38 np0005533252 systemd[1]: Starting Coldplug All udev Devices...
Nov 24 03:47:38 np0005533252 systemd[1]: Created slice Slice /system/modprobe.
Nov 24 03:47:38 np0005533252 systemd[1]: Starting Load Kernel Module configfs...
Nov 24 03:47:38 np0005533252 systemd[1]: Finished Coldplug All udev Devices.
Nov 24 03:47:38 np0005533252 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 03:47:38 np0005533252 systemd[1]: Finished Load Kernel Module configfs.
Nov 24 03:47:38 np0005533252 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 03:47:38 np0005533252 systemd[1]: Reached target Network.
Nov 24 03:47:38 np0005533252 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 03:47:38 np0005533252 systemd[1]: Starting dracut initqueue hook...
Nov 24 03:47:38 np0005533252 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 24 03:47:38 np0005533252 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 24 03:47:38 np0005533252 kernel: vda: vda1
Nov 24 03:47:38 np0005533252 kernel: scsi host0: ata_piix
Nov 24 03:47:38 np0005533252 kernel: scsi host1: ata_piix
Nov 24 03:47:38 np0005533252 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 24 03:47:38 np0005533252 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 24 03:47:38 np0005533252 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 03:47:38 np0005533252 systemd[1]: Reached target Initrd Root Device.
Nov 24 03:47:39 np0005533252 kernel: ata1: found unknown device (class 0)
Nov 24 03:47:39 np0005533252 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 24 03:47:39 np0005533252 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 24 03:47:39 np0005533252 systemd-udevd[488]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 03:47:39 np0005533252 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 24 03:47:39 np0005533252 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 24 03:47:39 np0005533252 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 24 03:47:39 np0005533252 systemd[1]: Mounting Kernel Configuration File System...
Nov 24 03:47:39 np0005533252 systemd[1]: Mounted Kernel Configuration File System.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target System Initialization.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target Basic System.
Nov 24 03:47:39 np0005533252 systemd[1]: Finished dracut initqueue hook.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target Remote File Systems.
Nov 24 03:47:39 np0005533252 systemd[1]: Starting dracut pre-mount hook...
Nov 24 03:47:39 np0005533252 systemd[1]: Finished dracut pre-mount hook.
Nov 24 03:47:39 np0005533252 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 24 03:47:39 np0005533252 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 24 03:47:39 np0005533252 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 03:47:39 np0005533252 systemd[1]: Mounting /sysroot...
Nov 24 03:47:39 np0005533252 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 24 03:47:39 np0005533252 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 24 03:47:39 np0005533252 kernel: XFS (vda1): Ending clean mount
Nov 24 03:47:39 np0005533252 systemd[1]: Mounted /sysroot.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target Initrd Root File System.
Nov 24 03:47:39 np0005533252 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 24 03:47:39 np0005533252 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target Initrd File Systems.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target Initrd Default Target.
Nov 24 03:47:39 np0005533252 systemd[1]: Starting dracut mount hook...
Nov 24 03:47:39 np0005533252 systemd[1]: Finished dracut mount hook.
Nov 24 03:47:39 np0005533252 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 24 03:47:39 np0005533252 rpc.idmapd[446]: exiting on signal 15
Nov 24 03:47:39 np0005533252 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 24 03:47:39 np0005533252 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Network.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Timer Units.
Nov 24 03:47:39 np0005533252 systemd[1]: dbus.socket: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 24 03:47:39 np0005533252 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Initrd Default Target.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Basic System.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Initrd Root Device.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Initrd /usr File System.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Path Units.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Remote File Systems.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Slice Units.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Socket Units.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target System Initialization.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Local File Systems.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Swaps.
Nov 24 03:47:39 np0005533252 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped dracut mount hook.
Nov 24 03:47:39 np0005533252 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped dracut pre-mount hook.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 24 03:47:39 np0005533252 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped dracut initqueue hook.
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Coldplug All udev Devices.
Nov 24 03:47:39 np0005533252 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped dracut pre-trigger hook.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Setup Virtual Console.
Nov 24 03:47:39 np0005533252 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Closed udev Control Socket.
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Closed udev Kernel Socket.
Nov 24 03:47:39 np0005533252 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped dracut pre-udev hook.
Nov 24 03:47:39 np0005533252 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped dracut cmdline hook.
Nov 24 03:47:39 np0005533252 systemd[1]: Starting Cleanup udev Database...
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 24 03:47:39 np0005533252 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 24 03:47:39 np0005533252 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Stopped Create System Users.
Nov 24 03:47:39 np0005533252 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 03:47:39 np0005533252 systemd[1]: Finished Cleanup udev Database.
Nov 24 03:47:39 np0005533252 systemd[1]: Reached target Switch Root.
Nov 24 03:47:39 np0005533252 systemd[1]: Starting Switch Root...
Nov 24 03:47:40 np0005533252 systemd[1]: Switching root.
Nov 24 03:47:40 np0005533252 systemd-journald[307]: Journal stopped
Nov 24 03:47:40 np0005533252 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 24 03:47:40 np0005533252 kernel: audit: type=1404 audit(1763974060.191:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 24 03:47:40 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 03:47:40 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 03:47:40 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 03:47:40 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 03:47:40 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 03:47:40 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 03:47:40 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 03:47:40 np0005533252 kernel: audit: type=1403 audit(1763974060.376:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 03:47:40 np0005533252 systemd: Successfully loaded SELinux policy in 190.680ms.
Nov 24 03:47:40 np0005533252 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 39.903ms.
Nov 24 03:47:40 np0005533252 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 03:47:40 np0005533252 systemd: Detected virtualization kvm.
Nov 24 03:47:40 np0005533252 systemd: Detected architecture x86-64.
Nov 24 03:47:40 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 03:47:40 np0005533252 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 24 03:47:40 np0005533252 systemd: Stopped Switch Root.
Nov 24 03:47:40 np0005533252 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 03:47:40 np0005533252 systemd: Created slice Slice /system/getty.
Nov 24 03:47:40 np0005533252 systemd: Created slice Slice /system/serial-getty.
Nov 24 03:47:40 np0005533252 systemd: Created slice Slice /system/sshd-keygen.
Nov 24 03:47:40 np0005533252 systemd: Created slice User and Session Slice.
Nov 24 03:47:40 np0005533252 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 03:47:40 np0005533252 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 24 03:47:40 np0005533252 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 24 03:47:40 np0005533252 systemd: Reached target Local Encrypted Volumes.
Nov 24 03:47:40 np0005533252 systemd: Stopped target Switch Root.
Nov 24 03:47:40 np0005533252 systemd: Stopped target Initrd File Systems.
Nov 24 03:47:40 np0005533252 systemd: Stopped target Initrd Root File System.
Nov 24 03:47:40 np0005533252 systemd: Reached target Local Integrity Protected Volumes.
Nov 24 03:47:40 np0005533252 systemd: Reached target Path Units.
Nov 24 03:47:40 np0005533252 systemd: Reached target rpc_pipefs.target.
Nov 24 03:47:40 np0005533252 systemd: Reached target Slice Units.
Nov 24 03:47:40 np0005533252 systemd: Reached target Swaps.
Nov 24 03:47:40 np0005533252 systemd: Reached target Local Verity Protected Volumes.
Nov 24 03:47:40 np0005533252 systemd: Listening on RPCbind Server Activation Socket.
Nov 24 03:47:40 np0005533252 systemd: Reached target RPC Port Mapper.
Nov 24 03:47:40 np0005533252 systemd: Listening on Process Core Dump Socket.
Nov 24 03:47:40 np0005533252 systemd: Listening on initctl Compatibility Named Pipe.
Nov 24 03:47:40 np0005533252 systemd: Listening on udev Control Socket.
Nov 24 03:47:40 np0005533252 systemd: Listening on udev Kernel Socket.
Nov 24 03:47:40 np0005533252 systemd: Mounting Huge Pages File System...
Nov 24 03:47:40 np0005533252 systemd: Mounting POSIX Message Queue File System...
Nov 24 03:47:40 np0005533252 systemd: Mounting Kernel Debug File System...
Nov 24 03:47:40 np0005533252 systemd: Mounting Kernel Trace File System...
Nov 24 03:47:40 np0005533252 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 03:47:40 np0005533252 systemd: Starting Create List of Static Device Nodes...
Nov 24 03:47:40 np0005533252 systemd: Starting Load Kernel Module configfs...
Nov 24 03:47:40 np0005533252 systemd: Starting Load Kernel Module drm...
Nov 24 03:47:40 np0005533252 systemd: Starting Load Kernel Module efi_pstore...
Nov 24 03:47:40 np0005533252 systemd: Starting Load Kernel Module fuse...
Nov 24 03:47:40 np0005533252 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 24 03:47:40 np0005533252 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 24 03:47:40 np0005533252 systemd: Stopped File System Check on Root Device.
Nov 24 03:47:40 np0005533252 systemd: Stopped Journal Service.
Nov 24 03:47:40 np0005533252 systemd: Starting Journal Service...
Nov 24 03:47:40 np0005533252 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 03:47:40 np0005533252 systemd: Starting Generate network units from Kernel command line...
Nov 24 03:47:40 np0005533252 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 03:47:40 np0005533252 systemd: Starting Remount Root and Kernel File Systems...
Nov 24 03:47:40 np0005533252 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 03:47:40 np0005533252 systemd: Starting Apply Kernel Variables...
Nov 24 03:47:40 np0005533252 systemd: Starting Coldplug All udev Devices...
Nov 24 03:47:40 np0005533252 kernel: fuse: init (API version 7.37)
Nov 24 03:47:40 np0005533252 systemd: Mounted Huge Pages File System.
Nov 24 03:47:40 np0005533252 systemd: Mounted POSIX Message Queue File System.
Nov 24 03:47:40 np0005533252 systemd-journald[679]: Journal started
Nov 24 03:47:40 np0005533252 systemd-journald[679]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 03:47:40 np0005533252 systemd[1]: Queued start job for default target Multi-User System.
Nov 24 03:47:40 np0005533252 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 03:47:40 np0005533252 systemd: Started Journal Service.
Nov 24 03:47:40 np0005533252 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 24 03:47:40 np0005533252 systemd[1]: Mounted Kernel Debug File System.
Nov 24 03:47:40 np0005533252 systemd[1]: Mounted Kernel Trace File System.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 03:47:40 np0005533252 kernel: ACPI: bus type drm_connector registered
Nov 24 03:47:40 np0005533252 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Load Kernel Module configfs.
Nov 24 03:47:40 np0005533252 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Load Kernel Module drm.
Nov 24 03:47:40 np0005533252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 24 03:47:40 np0005533252 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Load Kernel Module fuse.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Generate network units from Kernel command line.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 24 03:47:40 np0005533252 systemd[1]: Finished Apply Kernel Variables.
Nov 24 03:47:41 np0005533252 systemd[1]: Mounting FUSE Control File System...
Nov 24 03:47:41 np0005533252 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Rebuild Hardware Database...
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 24 03:47:41 np0005533252 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Load/Save OS Random Seed...
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Create System Users...
Nov 24 03:47:41 np0005533252 systemd[1]: Mounted FUSE Control File System.
Nov 24 03:47:41 np0005533252 systemd-journald[679]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 03:47:41 np0005533252 systemd-journald[679]: Received client request to flush runtime journal.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Load/Save OS Random Seed.
Nov 24 03:47:41 np0005533252 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Coldplug All udev Devices.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Create System Users.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 03:47:41 np0005533252 systemd[1]: Reached target Preparation for Local File Systems.
Nov 24 03:47:41 np0005533252 systemd[1]: Reached target Local File Systems.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 24 03:47:41 np0005533252 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 24 03:47:41 np0005533252 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 03:47:41 np0005533252 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Automatic Boot Loader Update...
Nov 24 03:47:41 np0005533252 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 03:47:41 np0005533252 bootctl[697]: Couldn't find EFI system partition, skipping.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Automatic Boot Loader Update.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Security Auditing Service...
Nov 24 03:47:41 np0005533252 systemd[1]: Starting RPC Bind...
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Rebuild Journal Catalog...
Nov 24 03:47:41 np0005533252 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 24 03:47:41 np0005533252 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Rebuild Journal Catalog.
Nov 24 03:47:41 np0005533252 systemd[1]: Started RPC Bind.
Nov 24 03:47:41 np0005533252 augenrules[708]: /sbin/augenrules: No change
Nov 24 03:47:41 np0005533252 augenrules[723]: No rules
Nov 24 03:47:41 np0005533252 augenrules[723]: enabled 1
Nov 24 03:47:41 np0005533252 augenrules[723]: failure 1
Nov 24 03:47:41 np0005533252 augenrules[723]: pid 703
Nov 24 03:47:41 np0005533252 augenrules[723]: rate_limit 0
Nov 24 03:47:41 np0005533252 augenrules[723]: backlog_limit 8192
Nov 24 03:47:41 np0005533252 augenrules[723]: lost 0
Nov 24 03:47:41 np0005533252 augenrules[723]: backlog 0
Nov 24 03:47:41 np0005533252 augenrules[723]: backlog_wait_time 60000
Nov 24 03:47:41 np0005533252 augenrules[723]: backlog_wait_time_actual 0
Nov 24 03:47:41 np0005533252 systemd[1]: Started Security Auditing Service.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Rebuild Hardware Database.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 03:47:41 np0005533252 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 03:47:41 np0005533252 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Load Kernel Module configfs...
Nov 24 03:47:41 np0005533252 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Load Kernel Module configfs.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting Update is Completed...
Nov 24 03:47:41 np0005533252 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 24 03:47:41 np0005533252 systemd[1]: Finished Update is Completed.
Nov 24 03:47:41 np0005533252 systemd-udevd[732]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 03:47:41 np0005533252 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 24 03:47:41 np0005533252 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 24 03:47:41 np0005533252 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 24 03:47:41 np0005533252 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 03:47:41 np0005533252 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 24 03:47:41 np0005533252 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 24 03:47:41 np0005533252 kernel: Console: switching to colour dummy device 80x25
Nov 24 03:47:41 np0005533252 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 24 03:47:41 np0005533252 kernel: [drm] features: -context_init
Nov 24 03:47:41 np0005533252 kernel: [drm] number of scanouts: 1
Nov 24 03:47:41 np0005533252 kernel: [drm] number of cap sets: 0
Nov 24 03:47:41 np0005533252 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 24 03:47:41 np0005533252 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 24 03:47:41 np0005533252 kernel: Console: switching to colour frame buffer device 128x48
Nov 24 03:47:41 np0005533252 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 24 03:47:41 np0005533252 kernel: kvm_amd: TSC scaling supported
Nov 24 03:47:41 np0005533252 kernel: kvm_amd: Nested Virtualization enabled
Nov 24 03:47:41 np0005533252 kernel: kvm_amd: Nested Paging enabled
Nov 24 03:47:41 np0005533252 kernel: kvm_amd: LBR virtualization supported
Nov 24 03:47:41 np0005533252 systemd[1]: Reached target System Initialization.
Nov 24 03:47:41 np0005533252 systemd[1]: Started dnf makecache --timer.
Nov 24 03:47:41 np0005533252 systemd[1]: Started Daily rotation of log files.
Nov 24 03:47:41 np0005533252 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 24 03:47:41 np0005533252 systemd[1]: Reached target Timer Units.
Nov 24 03:47:41 np0005533252 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 03:47:41 np0005533252 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 24 03:47:41 np0005533252 systemd[1]: Reached target Socket Units.
Nov 24 03:47:41 np0005533252 systemd[1]: Starting D-Bus System Message Bus...
Nov 24 03:47:42 np0005533252 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 03:47:42 np0005533252 systemd[1]: Started D-Bus System Message Bus.
Nov 24 03:47:42 np0005533252 dbus-broker-lau[791]: Ready
Nov 24 03:47:42 np0005533252 systemd[1]: Reached target Basic System.
Nov 24 03:47:42 np0005533252 systemd[1]: Starting NTP client/server...
Nov 24 03:47:42 np0005533252 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 24 03:47:42 np0005533252 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 24 03:47:42 np0005533252 systemd[1]: Starting IPv4 firewall with iptables...
Nov 24 03:47:42 np0005533252 systemd[1]: Started irqbalance daemon.
Nov 24 03:47:42 np0005533252 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 24 03:47:42 np0005533252 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 03:47:42 np0005533252 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 03:47:42 np0005533252 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 03:47:42 np0005533252 systemd[1]: Reached target sshd-keygen.target.
Nov 24 03:47:42 np0005533252 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 24 03:47:42 np0005533252 systemd[1]: Reached target User and Group Name Lookups.
Nov 24 03:47:42 np0005533252 systemd[1]: Starting User Login Management...
Nov 24 03:47:42 np0005533252 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 24 03:47:42 np0005533252 chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 03:47:42 np0005533252 chronyd[831]: Loaded 0 symmetric keys
Nov 24 03:47:42 np0005533252 chronyd[831]: Using right/UTC timezone to obtain leap second data
Nov 24 03:47:42 np0005533252 chronyd[831]: Loaded seccomp filter (level 2)
Nov 24 03:47:42 np0005533252 systemd[1]: Started NTP client/server.
Nov 24 03:47:42 np0005533252 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 24 03:47:42 np0005533252 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 24 03:47:42 np0005533252 systemd-logind[823]: New seat seat0.
Nov 24 03:47:42 np0005533252 systemd-logind[823]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 03:47:42 np0005533252 systemd-logind[823]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 03:47:42 np0005533252 systemd[1]: Started User Login Management.
Nov 24 03:47:42 np0005533252 iptables.init[817]: iptables: Applying firewall rules: [  OK  ]
Nov 24 03:47:42 np0005533252 systemd[1]: Finished IPv4 firewall with iptables.
Nov 24 03:47:42 np0005533252 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 24 Nov 2025 08:47:42 +0000. Up 6.36 seconds.
Nov 24 03:47:42 np0005533252 systemd[1]: run-cloud\x2dinit-tmp-tmpb89p5b3b.mount: Deactivated successfully.
Nov 24 03:47:43 np0005533252 systemd[1]: Starting Hostname Service...
Nov 24 03:47:43 np0005533252 systemd[1]: Started Hostname Service.
Nov 24 03:47:43 np0005533252 systemd-hostnamed[854]: Hostname set to <np0005533252.novalocal> (static)
Nov 24 03:47:43 np0005533252 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 24 03:47:43 np0005533252 systemd[1]: Reached target Preparation for Network.
Nov 24 03:47:43 np0005533252 systemd[1]: Starting Network Manager...
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.2673] NetworkManager (version 1.54.1-1.el9) is starting... (boot:e3886539-ea72-4427-b33b-0060f8fadd32)
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.2677] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.2808] manager[0x55c85cd4c080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.2846] hostname: hostname: using hostnamed
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.2847] hostname: static hostname changed from (none) to "np0005533252.novalocal"
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.2850] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.2950] manager[0x55c85cd4c080]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.2950] manager[0x55c85cd4c080]: rfkill: WWAN hardware radio set enabled
Nov 24 03:47:43 np0005533252 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3031] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3031] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3031] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3032] manager: Networking is enabled by state file
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3033] settings: Loaded settings plugin: keyfile (internal)
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3067] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3087] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3112] dhcp: init: Using DHCP client 'internal'
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3115] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3127] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3138] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3145] device (lo): Activation: starting connection 'lo' (3dc9a73f-5008-4d54-b1f5-ae0263930821)
Nov 24 03:47:43 np0005533252 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3152] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3155] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 03:47:43 np0005533252 systemd[1]: Started Network Manager.
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3180] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3185] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3186] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3187] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3188] device (eth0): carrier: link connected
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3191] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 03:47:43 np0005533252 systemd[1]: Reached target Network.
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3197] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3205] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3208] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3209] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3211] manager: NetworkManager state is now CONNECTING
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3211] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3217] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3219] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 03:47:43 np0005533252 systemd[1]: Starting Network Manager Wait Online...
Nov 24 03:47:43 np0005533252 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 24 03:47:43 np0005533252 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3331] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3334] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 03:47:43 np0005533252 NetworkManager[858]: <info>  [1763974063.3339] device (lo): Activation: successful, device activated.
Nov 24 03:47:43 np0005533252 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 24 03:47:43 np0005533252 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 03:47:43 np0005533252 systemd[1]: Reached target NFS client services.
Nov 24 03:47:43 np0005533252 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 03:47:43 np0005533252 systemd[1]: Reached target Remote File Systems.
Nov 24 03:47:43 np0005533252 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3089] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3100] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3131] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3160] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3162] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3166] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3176] device (eth0): Activation: successful, device activated.
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3182] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 03:47:45 np0005533252 NetworkManager[858]: <info>  [1763974065.3184] manager: startup complete
Nov 24 03:47:45 np0005533252 systemd[1]: Finished Network Manager Wait Online.
Nov 24 03:47:45 np0005533252 systemd[1]: Starting Cloud-init: Network Stage...
Nov 24 03:47:45 np0005533252 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 24 Nov 2025 08:47:45 +0000. Up 9.20 seconds.
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |  eth0  | True |        38.129.56.228         | 255.255.255.0 | global | fa:16:3e:c1:ba:0c |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fec1:ba0c/64 |       .       |  link  | fa:16:3e:c1:ba:0c |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Nov 24 03:47:45 np0005533252 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 03:47:46 np0005533252 cloud-init[921]: Generating public/private rsa key pair.
Nov 24 03:47:46 np0005533252 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 24 03:47:46 np0005533252 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 24 03:47:46 np0005533252 cloud-init[921]: The key fingerprint is:
Nov 24 03:47:46 np0005533252 cloud-init[921]: SHA256:zpyeWu5GMpvaw/fpqkzvhP9uZpI0QYDmP63axQQnNXw root@np0005533252.novalocal
Nov 24 03:47:46 np0005533252 cloud-init[921]: The key's randomart image is:
Nov 24 03:47:46 np0005533252 cloud-init[921]: +---[RSA 3072]----+
Nov 24 03:47:46 np0005533252 cloud-init[921]: |     ..oo        |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |    o  .o.E      |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |   o  o...       |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |    .  +.        |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |     . .S.       |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |      =B=.       |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |     .oXXo       |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |     =*O=.+.     |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |    oo*OX@*      |
Nov 24 03:47:46 np0005533252 cloud-init[921]: +----[SHA256]-----+
Nov 24 03:47:46 np0005533252 cloud-init[921]: Generating public/private ecdsa key pair.
Nov 24 03:47:46 np0005533252 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 24 03:47:46 np0005533252 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 24 03:47:46 np0005533252 cloud-init[921]: The key fingerprint is:
Nov 24 03:47:46 np0005533252 cloud-init[921]: SHA256:ZR0jNIt3UA2D6cs/W7BzPpLQPrPPUSlek7X4asUlSkQ root@np0005533252.novalocal
Nov 24 03:47:46 np0005533252 cloud-init[921]: The key's randomart image is:
Nov 24 03:47:46 np0005533252 cloud-init[921]: +---[ECDSA 256]---+
Nov 24 03:47:46 np0005533252 cloud-init[921]: |         .==Eo   |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |         .o*.+.  |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |        ..=.o   .|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |         +... o *|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |        S. +.+.Bo|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |          + +o++.|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |           +oo=. |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |            OBo. |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |            +X+. |
Nov 24 03:47:46 np0005533252 cloud-init[921]: +----[SHA256]-----+
Nov 24 03:47:46 np0005533252 cloud-init[921]: Generating public/private ed25519 key pair.
Nov 24 03:47:46 np0005533252 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 24 03:47:46 np0005533252 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 24 03:47:46 np0005533252 cloud-init[921]: The key fingerprint is:
Nov 24 03:47:46 np0005533252 cloud-init[921]: SHA256:s86N8E5lA+Hy3QoUOilffzXzSRUjOJBPKV3J7GrT9k0 root@np0005533252.novalocal
Nov 24 03:47:46 np0005533252 cloud-init[921]: The key's randomart image is:
Nov 24 03:47:46 np0005533252 cloud-init[921]: +--[ED25519 256]--+
Nov 24 03:47:46 np0005533252 cloud-init[921]: |        o.+ *o.oo|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |       + = * +. o|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |    . = = + o +. |
Nov 24 03:47:46 np0005533252 cloud-init[921]: |     o * + o o.+.|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |      . S * =  ..|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |         * B o  E|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |      . o o o ...|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |       * o     ..|
Nov 24 03:47:46 np0005533252 cloud-init[921]: |       .* .      |
Nov 24 03:47:46 np0005533252 cloud-init[921]: +----[SHA256]-----+
Nov 24 03:47:46 np0005533252 systemd[1]: Finished Cloud-init: Network Stage.
Nov 24 03:47:46 np0005533252 systemd[1]: Reached target Cloud-config availability.
Nov 24 03:47:46 np0005533252 systemd[1]: Reached target Network is Online.
Nov 24 03:47:46 np0005533252 systemd[1]: Starting Cloud-init: Config Stage...
Nov 24 03:47:46 np0005533252 systemd[1]: Starting Crash recovery kernel arming...
Nov 24 03:47:46 np0005533252 systemd[1]: Starting Notify NFS peers of a restart...
Nov 24 03:47:46 np0005533252 systemd[1]: Starting System Logging Service...
Nov 24 03:47:46 np0005533252 sm-notify[1004]: Version 2.5.4 starting
Nov 24 03:47:46 np0005533252 systemd[1]: Starting OpenSSH server daemon...
Nov 24 03:47:46 np0005533252 systemd[1]: Starting Permit User Sessions...
Nov 24 03:47:46 np0005533252 systemd[1]: Started Notify NFS peers of a restart.
Nov 24 03:47:46 np0005533252 systemd[1]: Started OpenSSH server daemon.
Nov 24 03:47:46 np0005533252 systemd[1]: Finished Permit User Sessions.
Nov 24 03:47:46 np0005533252 systemd[1]: Started Command Scheduler.
Nov 24 03:47:46 np0005533252 systemd[1]: Started Getty on tty1.
Nov 24 03:47:46 np0005533252 systemd[1]: Started Serial Getty on ttyS0.
Nov 24 03:47:46 np0005533252 systemd[1]: Reached target Login Prompts.
Nov 24 03:47:47 np0005533252 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Nov 24 03:47:47 np0005533252 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 24 03:47:47 np0005533252 systemd[1]: Started System Logging Service.
Nov 24 03:47:47 np0005533252 systemd[1]: Reached target Multi-User System.
Nov 24 03:47:47 np0005533252 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 24 03:47:47 np0005533252 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 24 03:47:47 np0005533252 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 24 03:47:47 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 03:47:47 np0005533252 kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Nov 24 03:47:47 np0005533252 kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 24 03:47:47 np0005533252 cloud-init[1127]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 24 Nov 2025 08:47:47 +0000. Up 10.83 seconds.
Nov 24 03:47:47 np0005533252 systemd[1]: Finished Cloud-init: Config Stage.
Nov 24 03:47:47 np0005533252 systemd[1]: Starting Cloud-init: Final Stage...
Nov 24 03:47:47 np0005533252 dracut[1283]: dracut-057-102.git20250818.el9
Nov 24 03:47:47 np0005533252 cloud-init[1301]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 24 Nov 2025 08:47:47 +0000. Up 11.23 seconds.
Nov 24 03:47:47 np0005533252 cloud-init[1312]: #############################################################
Nov 24 03:47:47 np0005533252 cloud-init[1314]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 24 03:47:47 np0005533252 dracut[1285]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 24 03:47:47 np0005533252 cloud-init[1322]: 256 SHA256:ZR0jNIt3UA2D6cs/W7BzPpLQPrPPUSlek7X4asUlSkQ root@np0005533252.novalocal (ECDSA)
Nov 24 03:47:47 np0005533252 cloud-init[1327]: 256 SHA256:s86N8E5lA+Hy3QoUOilffzXzSRUjOJBPKV3J7GrT9k0 root@np0005533252.novalocal (ED25519)
Nov 24 03:47:47 np0005533252 cloud-init[1332]: 3072 SHA256:zpyeWu5GMpvaw/fpqkzvhP9uZpI0QYDmP63axQQnNXw root@np0005533252.novalocal (RSA)
Nov 24 03:47:47 np0005533252 cloud-init[1334]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 24 03:47:47 np0005533252 cloud-init[1336]: #############################################################
Nov 24 03:47:47 np0005533252 cloud-init[1301]: Cloud-init v. 24.4-7.el9 finished at Mon, 24 Nov 2025 08:47:47 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.39 seconds
Nov 24 03:47:47 np0005533252 systemd[1]: Finished Cloud-init: Final Stage.
Nov 24 03:47:47 np0005533252 systemd[1]: Reached target Cloud-init target.
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: memstrack is not available
Nov 24 03:47:48 np0005533252 dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 03:47:48 np0005533252 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 03:47:49 np0005533252 dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 03:47:49 np0005533252 dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 03:47:49 np0005533252 dracut[1285]: memstrack is not available
Nov 24 03:47:49 np0005533252 dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 03:47:49 np0005533252 dracut[1285]: *** Including module: systemd ***
Nov 24 03:47:49 np0005533252 dracut[1285]: *** Including module: fips ***
Nov 24 03:47:49 np0005533252 dracut[1285]: *** Including module: systemd-initrd ***
Nov 24 03:47:49 np0005533252 dracut[1285]: *** Including module: i18n ***
Nov 24 03:47:49 np0005533252 dracut[1285]: *** Including module: drm ***
Nov 24 03:47:50 np0005533252 dracut[1285]: *** Including module: prefixdevname ***
Nov 24 03:47:50 np0005533252 dracut[1285]: *** Including module: kernel-modules ***
Nov 24 03:47:50 np0005533252 kernel: block vda: the capability attribute has been deprecated.
Nov 24 03:47:50 np0005533252 chronyd[831]: Selected source 216.197.156.83 (2.centos.pool.ntp.org)
Nov 24 03:47:50 np0005533252 chronyd[831]: System clock TAI offset set to 37 seconds
Nov 24 03:47:50 np0005533252 dracut[1285]: *** Including module: kernel-modules-extra ***
Nov 24 03:47:50 np0005533252 dracut[1285]: *** Including module: qemu ***
Nov 24 03:47:50 np0005533252 dracut[1285]: *** Including module: fstab-sys ***
Nov 24 03:47:50 np0005533252 dracut[1285]: *** Including module: rootfs-block ***
Nov 24 03:47:50 np0005533252 dracut[1285]: *** Including module: terminfo ***
Nov 24 03:47:50 np0005533252 dracut[1285]: *** Including module: udev-rules ***
Nov 24 03:47:51 np0005533252 dracut[1285]: Skipping udev rule: 91-permissions.rules
Nov 24 03:47:51 np0005533252 dracut[1285]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 24 03:47:51 np0005533252 dracut[1285]: *** Including module: virtiofs ***
Nov 24 03:47:51 np0005533252 dracut[1285]: *** Including module: dracut-systemd ***
Nov 24 03:47:51 np0005533252 dracut[1285]: *** Including module: usrmount ***
Nov 24 03:47:51 np0005533252 dracut[1285]: *** Including module: base ***
Nov 24 03:47:51 np0005533252 chronyd[831]: Selected source 23.133.168.246 (2.centos.pool.ntp.org)
Nov 24 03:47:51 np0005533252 dracut[1285]: *** Including module: fs-lib ***
Nov 24 03:47:51 np0005533252 dracut[1285]: *** Including module: kdumpbase ***
Nov 24 03:47:52 np0005533252 dracut[1285]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 24 03:47:52 np0005533252 dracut[1285]:  microcode_ctl module: mangling fw_dir
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 24 03:47:52 np0005533252 irqbalance[818]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 24 03:47:52 np0005533252 irqbalance[818]: IRQ 25 affinity is now unmanaged
Nov 24 03:47:52 np0005533252 irqbalance[818]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 24 03:47:52 np0005533252 irqbalance[818]: IRQ 31 affinity is now unmanaged
Nov 24 03:47:52 np0005533252 irqbalance[818]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 24 03:47:52 np0005533252 irqbalance[818]: IRQ 28 affinity is now unmanaged
Nov 24 03:47:52 np0005533252 irqbalance[818]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 24 03:47:52 np0005533252 irqbalance[818]: IRQ 32 affinity is now unmanaged
Nov 24 03:47:52 np0005533252 irqbalance[818]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 24 03:47:52 np0005533252 irqbalance[818]: IRQ 30 affinity is now unmanaged
Nov 24 03:47:52 np0005533252 irqbalance[818]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 24 03:47:52 np0005533252 irqbalance[818]: IRQ 29 affinity is now unmanaged
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 24 03:47:52 np0005533252 dracut[1285]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 24 03:47:52 np0005533252 dracut[1285]: *** Including module: openssl ***
Nov 24 03:47:52 np0005533252 dracut[1285]: *** Including module: shutdown ***
Nov 24 03:47:52 np0005533252 dracut[1285]: *** Including module: squash ***
Nov 24 03:47:52 np0005533252 dracut[1285]: *** Including modules done ***
Nov 24 03:47:52 np0005533252 dracut[1285]: *** Installing kernel module dependencies ***
Nov 24 03:47:53 np0005533252 dracut[1285]: *** Installing kernel module dependencies done ***
Nov 24 03:47:53 np0005533252 dracut[1285]: *** Resolving executable dependencies ***
Nov 24 03:47:54 np0005533252 dracut[1285]: *** Resolving executable dependencies done ***
Nov 24 03:47:54 np0005533252 dracut[1285]: *** Generating early-microcode cpio image ***
Nov 24 03:47:54 np0005533252 dracut[1285]: *** Store current command line parameters ***
Nov 24 03:47:54 np0005533252 dracut[1285]: Stored kernel commandline:
Nov 24 03:47:54 np0005533252 dracut[1285]: No dracut internal kernel commandline stored in the initramfs
Nov 24 03:47:54 np0005533252 dracut[1285]: *** Install squash loader ***
Nov 24 03:47:55 np0005533252 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 03:47:55 np0005533252 dracut[1285]: *** Squashing the files inside the initramfs ***
Nov 24 03:47:56 np0005533252 dracut[1285]: *** Squashing the files inside the initramfs done ***
Nov 24 03:47:56 np0005533252 dracut[1285]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 24 03:47:56 np0005533252 dracut[1285]: *** Hardlinking files ***
Nov 24 03:47:56 np0005533252 dracut[1285]: *** Hardlinking files done ***
Nov 24 03:47:57 np0005533252 dracut[1285]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 24 03:47:57 np0005533252 kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Nov 24 03:47:57 np0005533252 kdumpctl[1018]: kdump: Starting kdump: [OK]
Nov 24 03:47:57 np0005533252 systemd[1]: Finished Crash recovery kernel arming.
Nov 24 03:47:57 np0005533252 systemd[1]: Startup finished in 1.495s (kernel) + 2.291s (initrd) + 17.479s (userspace) = 21.266s.
Nov 24 03:48:06 np0005533252 systemd[1]: Created slice User Slice of UID 1000.
Nov 24 03:48:06 np0005533252 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 24 03:48:06 np0005533252 systemd-logind[823]: New session 1 of user zuul.
Nov 24 03:48:06 np0005533252 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 24 03:48:06 np0005533252 systemd[1]: Starting User Manager for UID 1000...
Nov 24 03:48:07 np0005533252 systemd[4299]: Queued start job for default target Main User Target.
Nov 24 03:48:07 np0005533252 systemd[4299]: Created slice User Application Slice.
Nov 24 03:48:07 np0005533252 systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 03:48:07 np0005533252 systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 03:48:07 np0005533252 systemd[4299]: Reached target Paths.
Nov 24 03:48:07 np0005533252 systemd[4299]: Reached target Timers.
Nov 24 03:48:07 np0005533252 systemd[4299]: Starting D-Bus User Message Bus Socket...
Nov 24 03:48:07 np0005533252 systemd[4299]: Starting Create User's Volatile Files and Directories...
Nov 24 03:48:07 np0005533252 systemd[4299]: Listening on D-Bus User Message Bus Socket.
Nov 24 03:48:07 np0005533252 systemd[4299]: Reached target Sockets.
Nov 24 03:48:07 np0005533252 systemd[4299]: Finished Create User's Volatile Files and Directories.
Nov 24 03:48:07 np0005533252 systemd[4299]: Reached target Basic System.
Nov 24 03:48:07 np0005533252 systemd[4299]: Reached target Main User Target.
Nov 24 03:48:07 np0005533252 systemd[4299]: Startup finished in 115ms.
Nov 24 03:48:07 np0005533252 systemd[1]: Started User Manager for UID 1000.
Nov 24 03:48:07 np0005533252 systemd[1]: Started Session 1 of User zuul.
Nov 24 03:48:07 np0005533252 python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 03:48:11 np0005533252 python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 03:48:12 np0005533252 irqbalance[818]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 24 03:48:12 np0005533252 irqbalance[818]: IRQ 26 affinity is now unmanaged
Nov 24 03:48:13 np0005533252 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 03:48:18 np0005533252 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 03:48:19 np0005533252 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 24 03:48:22 np0005533252 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtGjVEb/lsDO7QcEMCxozreHmfSkbPYtukhJN3wVhqpj6xeUXDPmoULx3/bUoF5EPUMcOV3spnCrShHpk7CaLVLFC6oNrQxPD181TchE78zphBpk8I1ehE8T9c7obAmyKrEcACWMj7F602jB1LiYcFYv4jlfDhyW3uTQnip2LICS2Kfa99lM5/ASVfbkov0rOqv+cDcBEhm9XXnUuxfGF0JDXhqv4Moan3wsyDreG2bhonj0B8vCTteeQ78h13an4IV58Xfard0MCw6jIS9DyQLfwpc3OLaKIMe3CC2oVRB77qysEMlCAEihHk42CgdoK8E/tovexbpxYDVKE2PymKN81ObjmT/CgplB54Mo8icraKe+Q1PzX43HsSi20RnipJFuMU33UpP94PO+WoB11gl03bBmluLjuLt4uV5EmciWyTP/feSffjrkuNiIBwXnGakV1+NRH2S8kMbnITAdJAdL3vn8XkYw9FARF1VW6T8Ft+GxeEEJxt8kii/56xDiM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:22 np0005533252 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:23 np0005533252 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:48:23 np0005533252 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974102.7075222-252-212114450957091/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=7f3765bc298a427e931eb426db28639c_id_rsa follow=False checksum=1ba3cc8ce402543c463affbc560046c840463cbe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:24 np0005533252 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:48:24 np0005533252 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974103.7631724-307-182706768082732/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=7f3765bc298a427e931eb426db28639c_id_rsa.pub follow=False checksum=2646cb7a5a0d5a58175bb49a3d139e585d675669 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:25 np0005533252 python3[4971]: ansible-ping Invoked with data=pong
Nov 24 03:48:26 np0005533252 python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 03:48:29 np0005533252 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 24 03:48:31 np0005533252 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:31 np0005533252 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:31 np0005533252 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:32 np0005533252 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:32 np0005533252 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:32 np0005533252 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:34 np0005533252 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:34 np0005533252 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:48:35 np0005533252 python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974114.5221498-32-224612580941032/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:36 np0005533252 python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:36 np0005533252 python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:36 np0005533252 python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:36 np0005533252 python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:37 np0005533252 python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:37 np0005533252 python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:37 np0005533252 python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:37 np0005533252 python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:38 np0005533252 python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:38 np0005533252 python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:38 np0005533252 python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:38 np0005533252 python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:39 np0005533252 python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:39 np0005533252 python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:39 np0005533252 python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:39 np0005533252 python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:40 np0005533252 python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:40 np0005533252 python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:40 np0005533252 python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:40 np0005533252 python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:41 np0005533252 python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:41 np0005533252 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:41 np0005533252 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:41 np0005533252 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:42 np0005533252 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:42 np0005533252 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 03:48:45 np0005533252 python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 03:48:45 np0005533252 systemd[1]: Starting Time & Date Service...
Nov 24 03:48:45 np0005533252 systemd[1]: Started Time & Date Service.
Nov 24 03:48:45 np0005533252 systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Nov 24 03:48:46 np0005533252 python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:46 np0005533252 python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:48:46 np0005533252 python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763974126.2427723-252-120902294950723/source _original_basename=tmpmejic5dc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:47 np0005533252 python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:48:47 np0005533252 python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763974126.9830725-302-276646997746038/source _original_basename=tmp1awe8xjc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:48 np0005533252 python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:48:48 np0005533252 python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763974128.0552998-382-7086504848778/source _original_basename=tmpwbiy5m2i follow=False checksum=f07c805834277da0cbee63ff582683dc2ed910d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:49 np0005533252 python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:48:49 np0005533252 python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:48:49 np0005533252 python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:48:50 np0005533252 python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974129.638929-452-181732642231147/source _original_basename=tmp6lujrq4x follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:48:50 np0005533252 python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-e0db-79ba-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:48:51 np0005533252 python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-e0db-79ba-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 24 03:48:52 np0005533252 python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:49:12 np0005533252 python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:49:15 np0005533252 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 03:50:12 np0005533252 systemd-logind[823]: Session 1 logged out. Waiting for processes to exit.
Nov 24 03:50:20 np0005533252 systemd[4299]: Starting Mark boot as successful...
Nov 24 03:50:20 np0005533252 systemd[4299]: Finished Mark boot as successful.
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 24 03:50:22 np0005533252 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 24 03:50:22 np0005533252 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8217] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 03:50:22 np0005533252 systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8356] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8374] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8377] device (eth1): carrier: link connected
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8378] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8384] policy: auto-activating connection 'Wired connection 1' (06cf09d6-5a4c-316f-86b1-330e0eaa7366)
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8387] device (eth1): Activation: starting connection 'Wired connection 1' (06cf09d6-5a4c-316f-86b1-330e0eaa7366)
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8387] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8389] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8391] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 03:50:22 np0005533252 NetworkManager[858]: <info>  [1763974222.8394] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 03:50:23 np0005533252 systemd-logind[823]: New session 3 of user zuul.
Nov 24 03:50:23 np0005533252 systemd[1]: Started Session 3 of User zuul.
Nov 24 03:50:24 np0005533252 python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0f51-775e-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:50:34 np0005533252 python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:50:34 np0005533252 python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974233.7433674-155-265954914046204/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=a8172c646497a56a59ad1405f1e405cb26f97005 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:50:34 np0005533252 python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 03:50:34 np0005533252 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 03:50:34 np0005533252 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 03:50:34 np0005533252 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 03:50:34 np0005533252 systemd[1]: Stopping Network Manager...
Nov 24 03:50:34 np0005533252 NetworkManager[858]: <info>  [1763974234.9384] caught SIGTERM, shutting down normally.
Nov 24 03:50:34 np0005533252 NetworkManager[858]: <info>  [1763974234.9395] dhcp4 (eth0): canceled DHCP transaction
Nov 24 03:50:34 np0005533252 NetworkManager[858]: <info>  [1763974234.9396] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 03:50:34 np0005533252 NetworkManager[858]: <info>  [1763974234.9396] dhcp4 (eth0): state changed no lease
Nov 24 03:50:34 np0005533252 NetworkManager[858]: <info>  [1763974234.9397] manager: NetworkManager state is now CONNECTING
Nov 24 03:50:34 np0005533252 NetworkManager[858]: <info>  [1763974234.9535] dhcp4 (eth1): canceled DHCP transaction
Nov 24 03:50:34 np0005533252 NetworkManager[858]: <info>  [1763974234.9535] dhcp4 (eth1): state changed no lease
Nov 24 03:50:34 np0005533252 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 03:50:34 np0005533252 NetworkManager[858]: <info>  [1763974234.9612] exiting (success)
Nov 24 03:50:34 np0005533252 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 03:50:34 np0005533252 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 03:50:34 np0005533252 systemd[1]: Stopped Network Manager.
Nov 24 03:50:34 np0005533252 systemd[1]: Starting Network Manager...
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0093] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e3886539-ea72-4427-b33b-0060f8fadd32)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0095] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0144] manager[0x559987aa7070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 03:50:35 np0005533252 systemd[1]: Starting Hostname Service...
Nov 24 03:50:35 np0005533252 systemd[1]: Started Hostname Service.
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0721] hostname: hostname: using hostnamed
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0721] hostname: static hostname changed from (none) to "np0005533252.novalocal"
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0724] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0728] manager[0x559987aa7070]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0728] manager[0x559987aa7070]: rfkill: WWAN hardware radio set enabled
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0752] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0752] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0753] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0754] manager: Networking is enabled by state file
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0756] settings: Loaded settings plugin: keyfile (internal)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0759] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0785] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0793] dhcp: init: Using DHCP client 'internal'
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0795] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0801] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0806] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0813] device (lo): Activation: starting connection 'lo' (3dc9a73f-5008-4d54-b1f5-ae0263930821)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0819] device (eth0): carrier: link connected
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0823] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0827] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0828] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0833] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0838] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0842] device (eth1): carrier: link connected
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0846] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0849] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (06cf09d6-5a4c-316f-86b1-330e0eaa7366) (indicated)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0849] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0853] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0858] device (eth1): Activation: starting connection 'Wired connection 1' (06cf09d6-5a4c-316f-86b1-330e0eaa7366)
Nov 24 03:50:35 np0005533252 systemd[1]: Started Network Manager.
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0864] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0871] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0873] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0874] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0877] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0879] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0880] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0883] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0885] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0891] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0894] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0901] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0903] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0920] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0922] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0926] device (lo): Activation: successful, device activated.
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0932] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0937] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.0998] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 systemd[1]: Starting Network Manager Wait Online...
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.1059] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.1062] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.1065] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.1070] device (eth0): Activation: successful, device activated.
Nov 24 03:50:35 np0005533252 NetworkManager[7190]: <info>  [1763974235.1076] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 03:50:35 np0005533252 python3[7262]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0f51-775e-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:50:45 np0005533252 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 03:51:05 np0005533252 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 03:51:06 np0005533252 chronyd[831]: Selected source 216.197.156.83 (2.centos.pool.ntp.org)
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.3918] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 03:51:20 np0005533252 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 03:51:20 np0005533252 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4153] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4156] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4165] device (eth1): Activation: successful, device activated.
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4175] manager: startup complete
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4179] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <warn>  [1763974280.4188] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4198] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 24 03:51:20 np0005533252 systemd[1]: Finished Network Manager Wait Online.
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4275] dhcp4 (eth1): canceled DHCP transaction
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4275] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4276] dhcp4 (eth1): state changed no lease
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4294] policy: auto-activating connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501)
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4299] device (eth1): Activation: starting connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501)
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4300] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4303] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4312] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4320] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4368] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4370] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 03:51:20 np0005533252 NetworkManager[7190]: <info>  [1763974280.4375] device (eth1): Activation: successful, device activated.
Nov 24 03:51:30 np0005533252 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 03:51:35 np0005533252 systemd[1]: session-3.scope: Deactivated successfully.
Nov 24 03:51:35 np0005533252 systemd[1]: session-3.scope: Consumed 1.388s CPU time.
Nov 24 03:51:35 np0005533252 systemd-logind[823]: Session 3 logged out. Waiting for processes to exit.
Nov 24 03:51:35 np0005533252 systemd-logind[823]: Removed session 3.
Nov 24 03:52:31 np0005533252 systemd-logind[823]: New session 4 of user zuul.
Nov 24 03:52:31 np0005533252 systemd[1]: Started Session 4 of User zuul.
Nov 24 03:52:31 np0005533252 python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:52:32 np0005533252 python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974351.7055833-373-3622969103097/source _original_basename=tmpd772uug3 follow=False checksum=a3ebf95cc3e4718aba4e7a218d4b9424c08a2ec8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:52:35 np0005533252 systemd[1]: session-4.scope: Deactivated successfully.
Nov 24 03:52:35 np0005533252 systemd-logind[823]: Session 4 logged out. Waiting for processes to exit.
Nov 24 03:52:35 np0005533252 systemd-logind[823]: Removed session 4.
Nov 24 03:53:20 np0005533252 systemd[4299]: Created slice User Background Tasks Slice.
Nov 24 03:53:20 np0005533252 systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 03:53:20 np0005533252 systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 03:57:49 np0005533252 systemd-logind[823]: New session 5 of user zuul.
Nov 24 03:57:49 np0005533252 systemd[1]: Started Session 5 of User zuul.
Nov 24 03:57:50 np0005533252 python3[7505]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-be4d-b146-000000001cd2-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:57:50 np0005533252 python3[7533]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:57:50 np0005533252 python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:57:51 np0005533252 python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:57:51 np0005533252 python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:57:52 np0005533252 python3[7638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:57:52 np0005533252 python3[7716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 03:57:53 np0005533252 python3[7789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974672.7094579-509-24398722470962/source _original_basename=tmpzak2sfrq follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 03:57:54 np0005533252 python3[7839]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 03:57:54 np0005533252 systemd[1]: Reloading.
Nov 24 03:57:54 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 03:57:56 np0005533252 python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 24 03:57:56 np0005533252 python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:57:56 np0005533252 python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:57:57 np0005533252 python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:57:57 np0005533252 python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:57:58 np0005533252 python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-be4d-b146-000000001cd9-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 03:57:58 np0005533252 python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 03:58:01 np0005533252 systemd[1]: session-5.scope: Deactivated successfully.
Nov 24 03:58:01 np0005533252 systemd[1]: session-5.scope: Consumed 3.821s CPU time.
Nov 24 03:58:01 np0005533252 systemd-logind[823]: Session 5 logged out. Waiting for processes to exit.
Nov 24 03:58:01 np0005533252 systemd-logind[823]: Removed session 5.
Nov 24 03:58:03 np0005533252 systemd-logind[823]: New session 6 of user zuul.
Nov 24 03:58:03 np0005533252 systemd[1]: Started Session 6 of User zuul.
Nov 24 03:58:03 np0005533252 python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 03:58:17 np0005533252 kernel: SELinux:  Converting 385 SID table entries...
Nov 24 03:58:17 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 03:58:17 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 03:58:17 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 03:58:17 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 03:58:17 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 03:58:17 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 03:58:17 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 03:58:26 np0005533252 kernel: SELinux:  Converting 385 SID table entries...
Nov 24 03:58:26 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 03:58:26 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 03:58:26 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 03:58:26 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 03:58:26 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 03:58:26 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 03:58:26 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 03:58:35 np0005533252 kernel: SELinux:  Converting 385 SID table entries...
Nov 24 03:58:35 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 03:58:35 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 03:58:35 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 03:58:35 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 03:58:35 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 03:58:35 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 03:58:35 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 03:58:36 np0005533252 setsebool[8164]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 24 03:58:36 np0005533252 setsebool[8164]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 24 03:58:47 np0005533252 kernel: SELinux:  Converting 388 SID table entries...
Nov 24 03:58:47 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 03:58:47 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 03:58:47 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 03:58:47 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 03:58:47 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 03:58:47 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 03:58:47 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 03:59:05 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 03:59:05 np0005533252 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 03:59:05 np0005533252 systemd[1]: Starting man-db-cache-update.service...
Nov 24 03:59:05 np0005533252 systemd[1]: Reloading.
Nov 24 03:59:05 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 03:59:05 np0005533252 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 03:59:42 np0005533252 irqbalance[818]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 24 03:59:42 np0005533252 irqbalance[818]: IRQ 27 affinity is now unmanaged
Nov 24 03:59:44 np0005533252 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 03:59:44 np0005533252 systemd[1]: Finished man-db-cache-update.service.
Nov 24 03:59:44 np0005533252 systemd[1]: man-db-cache-update.service: Consumed 45.952s CPU time.
Nov 24 03:59:44 np0005533252 systemd[1]: run-r03dbc781c11649fba267c33d387bf279.service: Deactivated successfully.
Nov 24 03:59:59 np0005533252 python3[29485]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-41c3-2628-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:00:00 np0005533252 kernel: evm: overlay not supported
Nov 24 04:00:00 np0005533252 systemd[4299]: Starting D-Bus User Message Bus...
Nov 24 04:00:00 np0005533252 dbus-broker-launch[29544]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 24 04:00:00 np0005533252 dbus-broker-launch[29544]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 24 04:00:00 np0005533252 systemd[4299]: Started D-Bus User Message Bus.
Nov 24 04:00:00 np0005533252 dbus-broker-lau[29544]: Ready
Nov 24 04:00:00 np0005533252 systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 04:00:00 np0005533252 systemd[4299]: Created slice Slice /user.
Nov 24 04:00:00 np0005533252 systemd[4299]: podman-29524.scope: unit configures an IP firewall, but not running as root.
Nov 24 04:00:00 np0005533252 systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Nov 24 04:00:00 np0005533252 systemd[4299]: Started podman-29524.scope.
Nov 24 04:00:00 np0005533252 systemd[4299]: Started podman-pause-c09f1c97.scope.
Nov 24 04:00:01 np0005533252 python3[29572]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.16:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.16:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:00:01 np0005533252 python3[29572]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 24 04:00:02 np0005533252 systemd[1]: session-6.scope: Deactivated successfully.
Nov 24 04:00:02 np0005533252 systemd[1]: session-6.scope: Consumed 58.495s CPU time.
Nov 24 04:00:02 np0005533252 systemd-logind[823]: Session 6 logged out. Waiting for processes to exit.
Nov 24 04:00:02 np0005533252 systemd-logind[823]: Removed session 6.
Nov 24 04:00:20 np0005533252 systemd[1]: Starting dnf makecache...
Nov 24 04:00:20 np0005533252 dnf[29573]: Failed determining last makecache time.
Nov 24 04:00:20 np0005533252 dnf[29573]: CentOS Stream 9 - BaseOS                         27 kB/s | 7.3 kB     00:00
Nov 24 04:00:20 np0005533252 dnf[29573]: CentOS Stream 9 - AppStream                      77 kB/s | 7.4 kB     00:00
Nov 24 04:00:21 np0005533252 dnf[29573]: CentOS Stream 9 - CRB                            76 kB/s | 7.2 kB     00:00
Nov 24 04:00:21 np0005533252 dnf[29573]: CentOS Stream 9 - Extras packages                26 kB/s | 8.3 kB     00:00
Nov 24 04:00:21 np0005533252 dnf[29573]: Metadata cache created.
Nov 24 04:00:21 np0005533252 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 04:00:21 np0005533252 systemd[1]: Finished dnf makecache.
Nov 24 04:00:27 np0005533252 systemd-logind[823]: New session 7 of user zuul.
Nov 24 04:00:27 np0005533252 systemd[1]: Started Session 7 of User zuul.
Nov 24 04:00:27 np0005533252 python3[29616]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 04:00:27 np0005533252 python3[29642]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 04:00:28 np0005533252 python3[29668]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005533252.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 24 04:00:29 np0005533252 python3[29702]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 04:00:29 np0005533252 python3[29780]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:00:30 np0005533252 python3[29853]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974829.3883047-169-193705438020268/source _original_basename=tmp3ujkehpw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:00:31 np0005533252 python3[29903]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Nov 24 04:00:31 np0005533252 systemd[1]: Starting Hostname Service...
Nov 24 04:00:31 np0005533252 systemd[1]: Started Hostname Service.
Nov 24 04:00:31 np0005533252 systemd-hostnamed[29907]: Changed pretty hostname to 'compute-1'
Nov 24 04:00:31 np0005533252 systemd-hostnamed[29907]: Hostname set to <compute-1> (static)
Nov 24 04:00:31 np0005533252 NetworkManager[7190]: <info>  [1763974831.2432] hostname: static hostname changed from "np0005533252.novalocal" to "compute-1"
Nov 24 04:00:31 np0005533252 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 04:00:31 np0005533252 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 04:00:31 np0005533252 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 04:00:31 np0005533252 systemd[1]: session-7.scope: Consumed 2.340s CPU time.
Nov 24 04:00:31 np0005533252 systemd-logind[823]: Session 7 logged out. Waiting for processes to exit.
Nov 24 04:00:31 np0005533252 systemd-logind[823]: Removed session 7.
Nov 24 04:00:41 np0005533252 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 04:01:01 np0005533252 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 04:02:53 np0005533252 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 24 04:02:53 np0005533252 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 24 04:02:53 np0005533252 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 24 04:02:53 np0005533252 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 24 04:04:28 np0005533252 systemd-logind[823]: New session 8 of user zuul.
Nov 24 04:04:28 np0005533252 systemd[1]: Started Session 8 of User zuul.
Nov 24 04:04:28 np0005533252 python3[30028]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:04:30 np0005533252 python3[30144]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:04:31 np0005533252 python3[30217]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:04:31 np0005533252 python3[30243]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:04:31 np0005533252 python3[30316]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:04:31 np0005533252 python3[30342]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:04:32 np0005533252 python3[30415]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:04:32 np0005533252 python3[30441]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:04:32 np0005533252 python3[30514]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:04:33 np0005533252 python3[30540]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:04:33 np0005533252 python3[30613]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:04:33 np0005533252 python3[30639]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:04:33 np0005533252 python3[30712]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:04:34 np0005533252 python3[30738]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:04:34 np0005533252 python3[30811]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.5090714-33950-110262530512426/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:04:47 np0005533252 python3[30859]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:09:47 np0005533252 systemd[1]: session-8.scope: Deactivated successfully.
Nov 24 04:09:47 np0005533252 systemd[1]: session-8.scope: Consumed 4.431s CPU time.
Nov 24 04:09:47 np0005533252 systemd-logind[823]: Session 8 logged out. Waiting for processes to exit.
Nov 24 04:09:47 np0005533252 systemd-logind[823]: Removed session 8.
Nov 24 04:16:27 np0005533252 systemd-logind[823]: New session 9 of user zuul.
Nov 24 04:16:27 np0005533252 systemd[1]: Started Session 9 of User zuul.
Nov 24 04:16:28 np0005533252 python3.9[31020]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:16:30 np0005533252 python3.9[31201]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:16:37 np0005533252 systemd[1]: session-9.scope: Deactivated successfully.
Nov 24 04:16:37 np0005533252 systemd[1]: session-9.scope: Consumed 7.584s CPU time.
Nov 24 04:16:37 np0005533252 systemd-logind[823]: Session 9 logged out. Waiting for processes to exit.
Nov 24 04:16:37 np0005533252 systemd-logind[823]: Removed session 9.
Nov 24 04:16:52 np0005533252 systemd-logind[823]: New session 10 of user zuul.
Nov 24 04:16:52 np0005533252 systemd[1]: Started Session 10 of User zuul.
Nov 24 04:16:53 np0005533252 python3.9[31413]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 04:16:54 np0005533252 python3.9[31587]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:16:56 np0005533252 python3.9[31739]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:16:57 np0005533252 python3.9[31893]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:16:58 np0005533252 python3.9[32045]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:16:58 np0005533252 python3.9[32197]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:16:59 np0005533252 python3.9[32320]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763975818.512851-178-28972882223475/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:17:00 np0005533252 python3.9[32472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:17:01 np0005533252 python3.9[32628]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:17:01 np0005533252 python3.9[32780]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:17:02 np0005533252 python3.9[32930]: ansible-ansible.builtin.service_facts Invoked
Nov 24 04:17:06 np0005533252 python3.9[33183]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:17:07 np0005533252 python3.9[33333]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:17:08 np0005533252 python3.9[33487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:17:09 np0005533252 python3.9[33645]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:17:10 np0005533252 python3.9[33729]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:17:54 np0005533252 systemd[1]: Reloading.
Nov 24 04:17:54 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:17:54 np0005533252 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 24 04:17:55 np0005533252 systemd[1]: Reloading.
Nov 24 04:17:55 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:17:55 np0005533252 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 24 04:17:55 np0005533252 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 24 04:17:55 np0005533252 systemd[1]: Reloading.
Nov 24 04:17:55 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:17:55 np0005533252 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 24 04:17:55 np0005533252 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 04:17:55 np0005533252 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 04:17:55 np0005533252 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 04:18:56 np0005533252 kernel: SELinux:  Converting 2719 SID table entries...
Nov 24 04:18:56 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 04:18:56 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 04:18:56 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 04:18:56 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 04:18:56 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 04:18:56 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 04:18:56 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 04:18:57 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 24 04:18:57 np0005533252 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 04:18:57 np0005533252 systemd[1]: Starting man-db-cache-update.service...
Nov 24 04:18:57 np0005533252 systemd[1]: Reloading.
Nov 24 04:18:57 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:18:57 np0005533252 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 04:18:58 np0005533252 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 04:18:58 np0005533252 systemd[1]: Finished man-db-cache-update.service.
Nov 24 04:18:58 np0005533252 systemd[1]: man-db-cache-update.service: Consumed 1.015s CPU time.
Nov 24 04:18:58 np0005533252 systemd[1]: run-r350dbfdd1cf44a9ea168072e3ab10b75.service: Deactivated successfully.
Nov 24 04:19:11 np0005533252 python3.9[35253]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:19:13 np0005533252 python3.9[35534]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 04:19:14 np0005533252 python3.9[35686]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 04:19:17 np0005533252 python3.9[35839]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:19:18 np0005533252 python3.9[35991]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 04:19:19 np0005533252 python3.9[36143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:19:22 np0005533252 python3.9[36295]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:19:23 np0005533252 python3.9[36418]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763975960.1317418-667-137260493172405/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:19:24 np0005533252 python3.9[36570]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:19:25 np0005533252 python3.9[36722]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:19:26 np0005533252 python3.9[36875]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:19:27 np0005533252 python3.9[37027]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 04:19:27 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:19:27 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:19:28 np0005533252 python3.9[37181]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 04:19:29 np0005533252 python3.9[37339]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 04:19:30 np0005533252 python3.9[37499]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 04:19:30 np0005533252 python3.9[37652]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 04:19:31 np0005533252 python3.9[37810]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 04:19:32 np0005533252 python3.9[37962]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:19:35 np0005533252 python3.9[38115]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:19:36 np0005533252 python3.9[38267]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:19:36 np0005533252 python3.9[38390]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763975975.5424674-1025-218862169985989/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:19:37 np0005533252 python3.9[38542]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:19:37 np0005533252 systemd[1]: Starting Load Kernel Modules...
Nov 24 04:19:37 np0005533252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 04:19:37 np0005533252 kernel: Bridge firewalling registered
Nov 24 04:19:37 np0005533252 systemd-modules-load[38546]: Inserted module 'br_netfilter'
Nov 24 04:19:37 np0005533252 systemd[1]: Finished Load Kernel Modules.
Nov 24 04:19:38 np0005533252 python3.9[38701]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:19:39 np0005533252 python3.9[38824]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763975978.0839388-1093-3652533125208/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:19:40 np0005533252 python3.9[38976]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:19:42 np0005533252 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 04:19:43 np0005533252 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 04:19:43 np0005533252 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 04:19:43 np0005533252 systemd[1]: Starting man-db-cache-update.service...
Nov 24 04:19:43 np0005533252 systemd[1]: Reloading.
Nov 24 04:19:43 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:19:43 np0005533252 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 04:19:46 np0005533252 python3.9[42308]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:19:46 np0005533252 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 04:19:46 np0005533252 systemd[1]: Finished man-db-cache-update.service.
Nov 24 04:19:46 np0005533252 systemd[1]: man-db-cache-update.service: Consumed 4.168s CPU time.
Nov 24 04:19:46 np0005533252 systemd[1]: run-r19c6059400c6443391e1ed9fb4205468.service: Deactivated successfully.
Nov 24 04:19:47 np0005533252 python3.9[42834]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 04:19:47 np0005533252 python3.9[42984]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:19:48 np0005533252 python3.9[43136]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:19:49 np0005533252 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 04:19:49 np0005533252 systemd[1]: Starting Authorization Manager...
Nov 24 04:19:49 np0005533252 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 04:19:49 np0005533252 polkitd[43353]: Started polkitd version 0.117
Nov 24 04:19:49 np0005533252 systemd[1]: Started Authorization Manager.
Nov 24 04:19:50 np0005533252 python3.9[43523]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:19:50 np0005533252 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 04:19:50 np0005533252 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 04:19:50 np0005533252 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 04:19:50 np0005533252 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 04:19:50 np0005533252 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 04:19:51 np0005533252 python3.9[43685]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 04:19:55 np0005533252 python3.9[43837]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:19:55 np0005533252 systemd[1]: Reloading.
Nov 24 04:19:55 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:19:56 np0005533252 python3.9[44026]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:19:56 np0005533252 systemd[1]: Reloading.
Nov 24 04:19:56 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:19:57 np0005533252 python3.9[44216]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:19:57 np0005533252 python3.9[44369]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:19:57 np0005533252 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 24 04:19:58 np0005533252 python3.9[44522]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:20:00 np0005533252 python3.9[44684]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:20:01 np0005533252 python3.9[44837]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:20:01 np0005533252 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 04:20:01 np0005533252 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 04:20:01 np0005533252 systemd[1]: Stopping Apply Kernel Variables...
Nov 24 04:20:01 np0005533252 systemd[1]: Starting Apply Kernel Variables...
Nov 24 04:20:01 np0005533252 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 04:20:01 np0005533252 systemd[1]: Finished Apply Kernel Variables.
Nov 24 04:20:02 np0005533252 systemd[1]: session-10.scope: Deactivated successfully.
Nov 24 04:20:02 np0005533252 systemd[1]: session-10.scope: Consumed 2min 7.932s CPU time.
Nov 24 04:20:02 np0005533252 systemd-logind[823]: Session 10 logged out. Waiting for processes to exit.
Nov 24 04:20:02 np0005533252 systemd-logind[823]: Removed session 10.
Nov 24 04:20:08 np0005533252 systemd-logind[823]: New session 11 of user zuul.
Nov 24 04:20:08 np0005533252 systemd[1]: Started Session 11 of User zuul.
Nov 24 04:20:09 np0005533252 python3.9[45021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:20:10 np0005533252 python3.9[45177]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 04:20:11 np0005533252 python3.9[45330]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 04:20:12 np0005533252 python3.9[45488]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 04:20:13 np0005533252 python3.9[45648]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:20:14 np0005533252 python3.9[45732]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 04:20:17 np0005533252 python3.9[45896]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:20:28 np0005533252 kernel: SELinux:  Converting 2731 SID table entries...
Nov 24 04:20:28 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 04:20:28 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 04:20:28 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 04:20:28 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 04:20:28 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 04:20:28 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 04:20:28 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 04:20:28 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 24 04:20:28 np0005533252 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 24 04:20:29 np0005533252 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 04:20:29 np0005533252 systemd[1]: Starting man-db-cache-update.service...
Nov 24 04:20:29 np0005533252 systemd[1]: Reloading.
Nov 24 04:20:29 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:20:29 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:20:29 np0005533252 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 04:20:30 np0005533252 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 04:20:30 np0005533252 systemd[1]: Finished man-db-cache-update.service.
Nov 24 04:20:30 np0005533252 systemd[1]: run-r10bde9b927b2472fbc8a3af1356e8ccd.service: Deactivated successfully.
Nov 24 04:20:31 np0005533252 python3.9[46996]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:20:32 np0005533252 systemd[1]: Reloading.
Nov 24 04:20:32 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:20:32 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:20:32 np0005533252 systemd[1]: Starting Open vSwitch Database Unit...
Nov 24 04:20:32 np0005533252 chown[47038]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 24 04:20:32 np0005533252 ovs-ctl[47043]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 24 04:20:32 np0005533252 ovs-ctl[47043]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 24 04:20:32 np0005533252 ovs-ctl[47043]: Starting ovsdb-server [  OK  ]
Nov 24 04:20:32 np0005533252 ovs-vsctl[47092]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 24 04:20:32 np0005533252 ovs-vsctl[47108]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"803b139a-7fca-4549-8597-645cf677225d\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 24 04:20:32 np0005533252 ovs-ctl[47043]: Configuring Open vSwitch system IDs [  OK  ]
Nov 24 04:20:32 np0005533252 ovs-ctl[47043]: Enabling remote OVSDB managers [  OK  ]
Nov 24 04:20:32 np0005533252 ovs-vsctl[47118]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 24 04:20:32 np0005533252 systemd[1]: Started Open vSwitch Database Unit.
Nov 24 04:20:32 np0005533252 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 24 04:20:32 np0005533252 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 24 04:20:32 np0005533252 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 24 04:20:32 np0005533252 kernel: openvswitch: Open vSwitch switching datapath
Nov 24 04:20:32 np0005533252 ovs-ctl[47163]: Inserting openvswitch module [  OK  ]
Nov 24 04:20:32 np0005533252 ovs-ctl[47131]: Starting ovs-vswitchd [  OK  ]
Nov 24 04:20:32 np0005533252 ovs-vsctl[47181]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 24 04:20:32 np0005533252 ovs-ctl[47131]: Enabling remote OVSDB managers [  OK  ]
Nov 24 04:20:32 np0005533252 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 24 04:20:32 np0005533252 systemd[1]: Starting Open vSwitch...
Nov 24 04:20:32 np0005533252 systemd[1]: Finished Open vSwitch.
Nov 24 04:20:33 np0005533252 python3.9[47332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:20:35 np0005533252 python3.9[47484]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 04:20:36 np0005533252 kernel: SELinux:  Converting 2745 SID table entries...
Nov 24 04:20:36 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 04:20:36 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 04:20:36 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 04:20:36 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 04:20:36 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 04:20:36 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 04:20:36 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 04:20:37 np0005533252 python3.9[47639]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:20:38 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 24 04:20:38 np0005533252 python3.9[47797]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:20:40 np0005533252 python3.9[47950]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:20:42 np0005533252 python3.9[48237]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 04:20:43 np0005533252 python3.9[48387]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:20:43 np0005533252 python3.9[48541]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:20:45 np0005533252 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 04:20:45 np0005533252 systemd[1]: Starting man-db-cache-update.service...
Nov 24 04:20:45 np0005533252 systemd[1]: Reloading.
Nov 24 04:20:45 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:20:45 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:20:45 np0005533252 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 04:20:46 np0005533252 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 04:20:46 np0005533252 systemd[1]: Finished man-db-cache-update.service.
Nov 24 04:20:46 np0005533252 systemd[1]: run-r068d288e9acf47acb77677aa89baf31b.service: Deactivated successfully.
Nov 24 04:20:47 np0005533252 python3.9[48857]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:20:47 np0005533252 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 04:20:47 np0005533252 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 04:20:47 np0005533252 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 04:20:47 np0005533252 systemd[1]: Stopping Network Manager...
Nov 24 04:20:47 np0005533252 NetworkManager[7190]: <info>  [1763976047.0921] caught SIGTERM, shutting down normally.
Nov 24 04:20:47 np0005533252 NetworkManager[7190]: <info>  [1763976047.0937] dhcp4 (eth0): canceled DHCP transaction
Nov 24 04:20:47 np0005533252 NetworkManager[7190]: <info>  [1763976047.0937] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 04:20:47 np0005533252 NetworkManager[7190]: <info>  [1763976047.0937] dhcp4 (eth0): state changed no lease
Nov 24 04:20:47 np0005533252 NetworkManager[7190]: <info>  [1763976047.0939] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 04:20:47 np0005533252 NetworkManager[7190]: <info>  [1763976047.1021] exiting (success)
Nov 24 04:20:47 np0005533252 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 04:20:47 np0005533252 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 04:20:47 np0005533252 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 04:20:47 np0005533252 systemd[1]: Stopped Network Manager.
Nov 24 04:20:47 np0005533252 systemd[1]: NetworkManager.service: Consumed 9.382s CPU time, 4.3M memory peak, read 0B from disk, written 31.5K to disk.
Nov 24 04:20:47 np0005533252 systemd[1]: Starting Network Manager...
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.1859] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e3886539-ea72-4427-b33b-0060f8fadd32)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.1861] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.1914] manager[0x556aaa69a090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 04:20:47 np0005533252 systemd[1]: Starting Hostname Service...
Nov 24 04:20:47 np0005533252 systemd[1]: Started Hostname Service.
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2745] hostname: hostname: using hostnamed
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2747] hostname: static hostname changed from (none) to "compute-1"
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2751] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2755] manager[0x556aaa69a090]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2756] manager[0x556aaa69a090]: rfkill: WWAN hardware radio set enabled
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2775] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2782] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2782] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2783] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2783] manager: Networking is enabled by state file
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2785] settings: Loaded settings plugin: keyfile (internal)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2788] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2809] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2817] dhcp: init: Using DHCP client 'internal'
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2819] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2823] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2827] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2834] device (lo): Activation: starting connection 'lo' (3dc9a73f-5008-4d54-b1f5-ae0263930821)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2839] device (eth0): carrier: link connected
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2844] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2848] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2848] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2853] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2858] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2863] device (eth1): carrier: link connected
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2867] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2871] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501) (indicated)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2871] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2875] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2880] device (eth1): Activation: starting connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501)
Nov 24 04:20:47 np0005533252 systemd[1]: Started Network Manager.
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2892] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2899] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2901] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2902] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2904] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2907] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2909] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2911] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2913] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2917] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2919] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2926] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2936] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2946] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2948] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2955] device (lo): Activation: successful, device activated.
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2965] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.2973] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 04:20:47 np0005533252 systemd[1]: Starting Network Manager Wait Online...
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3037] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3042] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3047] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3051] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3054] device (eth1): Activation: successful, device activated.
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3061] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3062] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3065] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3068] device (eth0): Activation: successful, device activated.
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3071] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 04:20:47 np0005533252 NetworkManager[48870]: <info>  [1763976047.3073] manager: startup complete
Nov 24 04:20:47 np0005533252 systemd[1]: Finished Network Manager Wait Online.
Nov 24 04:20:48 np0005533252 python3.9[49083]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:20:52 np0005533252 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 04:20:52 np0005533252 systemd[1]: Starting man-db-cache-update.service...
Nov 24 04:20:52 np0005533252 systemd[1]: Reloading.
Nov 24 04:20:52 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:20:52 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:20:52 np0005533252 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 04:20:53 np0005533252 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 04:20:53 np0005533252 systemd[1]: Finished man-db-cache-update.service.
Nov 24 04:20:53 np0005533252 systemd[1]: run-r0618bd78c0cd45cd976e1acc0f0b9c7b.service: Deactivated successfully.
Nov 24 04:20:54 np0005533252 python3.9[49543]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:20:55 np0005533252 python3.9[49695]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:20:56 np0005533252 python3.9[49849]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:20:56 np0005533252 python3.9[50001]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:20:57 np0005533252 python3.9[50153]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:20:57 np0005533252 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 04:20:57 np0005533252 python3.9[50305]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:20:58 np0005533252 python3.9[50457]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:20:59 np0005533252 python3.9[50580]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976058.335315-648-9885838827148/.source _original_basename=.51bezzs2 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:21:00 np0005533252 python3.9[50732]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:21:00 np0005533252 python3.9[50884]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 24 04:21:01 np0005533252 python3.9[51036]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:21:04 np0005533252 python3.9[51463]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 24 04:21:05 np0005533252 ansible-async_wrapper.py[51638]: Invoked with j48334358143 300 /home/zuul/.ansible/tmp/ansible-tmp-1763976064.3764477-846-45376300257665/AnsiballZ_edpm_os_net_config.py _
Nov 24 04:21:05 np0005533252 ansible-async_wrapper.py[51641]: Starting module and watcher
Nov 24 04:21:05 np0005533252 ansible-async_wrapper.py[51641]: Start watching 51642 (300)
Nov 24 04:21:05 np0005533252 ansible-async_wrapper.py[51642]: Start module (51642)
Nov 24 04:21:05 np0005533252 ansible-async_wrapper.py[51638]: Return async_wrapper task started.
Nov 24 04:21:05 np0005533252 python3.9[51643]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 24 04:21:06 np0005533252 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 24 04:21:06 np0005533252 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 24 04:21:06 np0005533252 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 24 04:21:06 np0005533252 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 24 04:21:06 np0005533252 kernel: cfg80211: failed to load regulatory.db
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1331] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1345] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1844] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1845] audit: op="connection-add" uuid="d35f6803-0e92-4bfd-97f1-ccee68d7d040" name="br-ex-br" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1860] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1861] audit: op="connection-add" uuid="95cfb4ea-b324-4465-8750-11bbf20cc936" name="br-ex-port" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1873] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1874] audit: op="connection-add" uuid="77db1bad-0624-4996-a4d3-ef8dfa37fc78" name="eth1-port" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1885] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1886] audit: op="connection-add" uuid="7d7fc543-b162-40ec-b741-b4c932a38070" name="vlan20-port" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1897] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1898] audit: op="connection-add" uuid="07d677b0-58d6-4f26-a17d-bb6fe216da22" name="vlan21-port" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1910] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1911] audit: op="connection-add" uuid="38328766-b825-4c80-8aaa-43c40d6b880d" name="vlan22-port" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1922] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1923] audit: op="connection-add" uuid="7484f959-1f97-4645-ae70-84a4a9412fd4" name="vlan23-port" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1942] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1959] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1960] audit: op="connection-add" uuid="378995cd-982e-444e-b7a0-5d63ee4845e3" name="br-ex-if" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.1997] audit: op="connection-update" uuid="eed6ff3f-ed68-533f-b181-f50564eca501" name="ci-private-network" args="ipv6.routing-rules,ipv6.routes,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,connection.port-type,connection.master,connection.controller,connection.slave-type,connection.timestamp,ovs-external-ids.data,ipv4.never-default,ipv4.routing-rules,ipv4.dns,ipv4.method,ipv4.addresses,ipv4.routes,ovs-interface.type" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2012] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2014] audit: op="connection-add" uuid="1d905685-a79b-4db1-b617-8a5901f95b97" name="vlan20-if" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2029] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2032] audit: op="connection-add" uuid="b3e069b4-e07e-4c8c-934d-0f85b6caf1ac" name="vlan21-if" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2048] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2049] audit: op="connection-add" uuid="4d469ff9-403d-47f3-8256-03ede699020a" name="vlan22-if" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2065] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2066] audit: op="connection-add" uuid="b0b77b95-bf5f-405b-b2cb-4411bf049b86" name="vlan23-if" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2077] audit: op="connection-delete" uuid="06cf09d6-5a4c-316f-86b1-330e0eaa7366" name="Wired connection 1" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2089] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2099] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2102] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d35f6803-0e92-4bfd-97f1-ccee68d7d040)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2103] audit: op="connection-activate" uuid="d35f6803-0e92-4bfd-97f1-ccee68d7d040" name="br-ex-br" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2104] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2110] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2114] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (95cfb4ea-b324-4465-8750-11bbf20cc936)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2116] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2121] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2124] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (77db1bad-0624-4996-a4d3-ef8dfa37fc78)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2126] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2132] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2135] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (7d7fc543-b162-40ec-b741-b4c932a38070)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2137] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2142] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2146] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (07d677b0-58d6-4f26-a17d-bb6fe216da22)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2148] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2153] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2157] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (38328766-b825-4c80-8aaa-43c40d6b880d)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2159] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2165] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2169] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (7484f959-1f97-4645-ae70-84a4a9412fd4)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2176] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2180] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2182] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2188] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2193] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2196] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (378995cd-982e-444e-b7a0-5d63ee4845e3)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2197] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2200] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2201] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2202] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2203] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2213] device (eth1): disconnecting for new activation request.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2214] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2217] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2218] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2220] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2222] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2226] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2229] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (1d905685-a79b-4db1-b617-8a5901f95b97)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2230] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2233] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2234] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2235] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2237] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2242] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2246] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (b3e069b4-e07e-4c8c-934d-0f85b6caf1ac)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2246] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2250] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2252] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2253] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2257] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2262] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2267] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (4d469ff9-403d-47f3-8256-03ede699020a)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2268] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2270] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2273] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2274] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2277] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2281] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2284] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b0b77b95-bf5f-405b-b2cb-4411bf049b86)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2285] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2288] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2290] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2292] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2293] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2306] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2308] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2311] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2313] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2320] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2323] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2327] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2331] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2332] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2337] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2341] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 kernel: ovs-system: entered promiscuous mode
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2343] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2345] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2350] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2355] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 systemd-udevd[51648]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:21:07 np0005533252 kernel: Timeout policy base is empty
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2358] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2360] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2366] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2370] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2373] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2375] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2379] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2383] dhcp4 (eth0): canceled DHCP transaction
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2383] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2384] dhcp4 (eth0): state changed no lease
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2386] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2396] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2400] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51644 uid=0 result="fail" reason="Device is not activated"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2406] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2414] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2421] device (eth1): disconnecting for new activation request.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2422] audit: op="connection-activate" uuid="eed6ff3f-ed68-533f-b181-f50564eca501" name="ci-private-network" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2423] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2426] dhcp4 (eth0): state changed new lease, address=38.129.56.228
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2429] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2483] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 04:21:07 np0005533252 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2556] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 04:21:07 np0005533252 kernel: br-ex: entered promiscuous mode
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2722] device (eth1): Activation: starting connection 'ci-private-network' (eed6ff3f-ed68-533f-b181-f50564eca501)
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2726] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2733] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2735] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2740] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2743] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2750] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2754] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2755] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2756] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2757] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2758] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2768] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2773] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2779] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2781] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2784] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2787] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2790] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2793] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2797] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2799] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2802] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2805] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2808] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2811] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2822] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2830] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 24 04:21:07 np0005533252 kernel: vlan22: entered promiscuous mode
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2844] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2852] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2856] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2861] device (eth1): Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2867] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2869] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2873] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 kernel: vlan20: entered promiscuous mode
Nov 24 04:21:07 np0005533252 systemd-udevd[51649]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:21:07 np0005533252 kernel: vlan23: entered promiscuous mode
Nov 24 04:21:07 np0005533252 systemd-udevd[51647]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.2989] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3005] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3018] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 24 04:21:07 np0005533252 kernel: vlan21: entered promiscuous mode
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3031] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3048] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3050] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3057] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3070] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3083] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3116] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3117] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3121] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3127] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3131] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3138] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3148] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3157] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3193] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3195] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 04:21:07 np0005533252 NetworkManager[48870]: <info>  [1763976067.3203] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 04:21:08 np0005533252 NetworkManager[48870]: <info>  [1763976068.4497] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 04:21:08 np0005533252 NetworkManager[48870]: <info>  [1763976068.6027] checkpoint[0x556aaa670950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 24 04:21:08 np0005533252 NetworkManager[48870]: <info>  [1763976068.6030] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51644 uid=0 result="success"
Nov 24 04:21:08 np0005533252 NetworkManager[48870]: <info>  [1763976068.8968] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51644 uid=0 result="success"
Nov 24 04:21:08 np0005533252 NetworkManager[48870]: <info>  [1763976068.8976] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51644 uid=0 result="success"
Nov 24 04:21:09 np0005533252 python3.9[52004]: ansible-ansible.legacy.async_status Invoked with jid=j48334358143.51638 mode=status _async_dir=/root/.ansible_async
Nov 24 04:21:09 np0005533252 NetworkManager[48870]: <info>  [1763976069.0658] audit: op="networking-control" arg="global-dns-configuration" pid=51644 uid=0 result="success"
Nov 24 04:21:09 np0005533252 NetworkManager[48870]: <info>  [1763976069.0684] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 24 04:21:09 np0005533252 NetworkManager[48870]: <info>  [1763976069.0710] audit: op="networking-control" arg="global-dns-configuration" pid=51644 uid=0 result="success"
Nov 24 04:21:09 np0005533252 NetworkManager[48870]: <info>  [1763976069.0736] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51644 uid=0 result="success"
Nov 24 04:21:09 np0005533252 NetworkManager[48870]: <info>  [1763976069.2418] checkpoint[0x556aaa670a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 24 04:21:09 np0005533252 NetworkManager[48870]: <info>  [1763976069.2423] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51644 uid=0 result="success"
Nov 24 04:21:09 np0005533252 ansible-async_wrapper.py[51642]: Module complete (51642)
Nov 24 04:21:10 np0005533252 ansible-async_wrapper.py[51641]: Done in kid B.
Nov 24 04:21:12 np0005533252 python3.9[52108]: ansible-ansible.legacy.async_status Invoked with jid=j48334358143.51638 mode=status _async_dir=/root/.ansible_async
Nov 24 04:21:12 np0005533252 python3.9[52208]: ansible-ansible.legacy.async_status Invoked with jid=j48334358143.51638 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 04:21:13 np0005533252 python3.9[52360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:21:14 np0005533252 python3.9[52483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976073.2285266-927-218480229849270/.source.returncode _original_basename=.1654v2hd follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:21:15 np0005533252 python3.9[52635]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:21:15 np0005533252 python3.9[52758]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976074.5970228-975-185306397415548/.source.cfg _original_basename=.310nocc9 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:21:16 np0005533252 python3.9[52911]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:21:16 np0005533252 systemd[1]: Reloading Network Manager...
Nov 24 04:21:16 np0005533252 NetworkManager[48870]: <info>  [1763976076.4419] audit: op="reload" arg="0" pid=52915 uid=0 result="success"
Nov 24 04:21:16 np0005533252 NetworkManager[48870]: <info>  [1763976076.4424] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 24 04:21:16 np0005533252 systemd[1]: Reloaded Network Manager.
Nov 24 04:21:16 np0005533252 systemd-logind[823]: Session 11 logged out. Waiting for processes to exit.
Nov 24 04:21:16 np0005533252 systemd[1]: session-11.scope: Deactivated successfully.
Nov 24 04:21:16 np0005533252 systemd[1]: session-11.scope: Consumed 46.611s CPU time.
Nov 24 04:21:16 np0005533252 systemd-logind[823]: Removed session 11.
Nov 24 04:21:17 np0005533252 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 04:21:22 np0005533252 systemd-logind[823]: New session 12 of user zuul.
Nov 24 04:21:22 np0005533252 systemd[1]: Started Session 12 of User zuul.
Nov 24 04:21:23 np0005533252 python3.9[53101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:21:24 np0005533252 python3.9[53255]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:21:25 np0005533252 python3.9[53449]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:21:26 np0005533252 systemd[1]: session-12.scope: Deactivated successfully.
Nov 24 04:21:26 np0005533252 systemd[1]: session-12.scope: Consumed 2.237s CPU time.
Nov 24 04:21:26 np0005533252 systemd-logind[823]: Session 12 logged out. Waiting for processes to exit.
Nov 24 04:21:26 np0005533252 systemd-logind[823]: Removed session 12.
Nov 24 04:21:26 np0005533252 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 04:21:31 np0005533252 systemd-logind[823]: New session 13 of user zuul.
Nov 24 04:21:31 np0005533252 systemd[1]: Started Session 13 of User zuul.
Nov 24 04:21:32 np0005533252 python3.9[53631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:21:33 np0005533252 python3.9[53785]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:21:34 np0005533252 python3.9[53941]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:21:35 np0005533252 python3.9[54026]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:21:37 np0005533252 python3.9[54179]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:21:39 np0005533252 python3.9[54375]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:21:39 np0005533252 python3.9[54527]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:21:39 np0005533252 podman[54528]: 2025-11-24 09:21:39.983598565 +0000 UTC m=+0.043513895 system refresh
Nov 24 04:21:40 np0005533252 python3.9[54690]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:21:40 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:21:41 np0005533252 python3.9[54813]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976100.317121-198-35324544005521/.source.json follow=False _original_basename=podman_network_config.j2 checksum=35abbe77809912ec8de56cd1324b6ed1d7c68760 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:21:43 np0005533252 python3.9[54965]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:21:43 np0005533252 python3.9[55088]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976102.617085-244-26992810290010/.source.conf follow=False _original_basename=registries.conf.j2 checksum=d119d0981ddb964361aab9d45fb39837ba29c925 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:21:44 np0005533252 python3.9[55240]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:21:45 np0005533252 python3.9[55392]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:21:45 np0005533252 python3.9[55544]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:21:46 np0005533252 python3.9[55696]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:21:47 np0005533252 python3.9[55848]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:21:49 np0005533252 python3.9[56001]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:21:50 np0005533252 python3.9[56155]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:21:51 np0005533252 python3.9[56307]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:21:52 np0005533252 python3.9[56459]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:21:53 np0005533252 python3.9[56612]: ansible-service_facts Invoked
Nov 24 04:21:53 np0005533252 network[56629]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:21:53 np0005533252 network[56630]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:21:53 np0005533252 network[56631]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:21:59 np0005533252 python3.9[57083]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:22:02 np0005533252 python3.9[57236]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 04:22:03 np0005533252 python3.9[57388]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:04 np0005533252 python3.9[57513]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976123.1217928-675-217674195329042/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:05 np0005533252 python3.9[57667]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:05 np0005533252 python3.9[57792]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976124.6239252-721-270838777346671/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:07 np0005533252 python3.9[57946]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:08 np0005533252 python3.9[58100]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:22:09 np0005533252 python3.9[58184]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:22:11 np0005533252 python3.9[58338]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:22:12 np0005533252 python3.9[58422]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:22:12 np0005533252 chronyd[831]: chronyd exiting
Nov 24 04:22:12 np0005533252 systemd[1]: Stopping NTP client/server...
Nov 24 04:22:12 np0005533252 systemd[1]: chronyd.service: Deactivated successfully.
Nov 24 04:22:12 np0005533252 systemd[1]: Stopped NTP client/server.
Nov 24 04:22:12 np0005533252 systemd[1]: Starting NTP client/server...
Nov 24 04:22:12 np0005533252 chronyd[58430]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 04:22:12 np0005533252 chronyd[58430]: Frequency -23.792 +/- 0.073 ppm read from /var/lib/chrony/drift
Nov 24 04:22:12 np0005533252 chronyd[58430]: Loaded seccomp filter (level 2)
Nov 24 04:22:12 np0005533252 systemd[1]: Started NTP client/server.
Nov 24 04:22:12 np0005533252 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 04:22:12 np0005533252 systemd[1]: session-13.scope: Consumed 24.028s CPU time.
Nov 24 04:22:12 np0005533252 systemd-logind[823]: Session 13 logged out. Waiting for processes to exit.
Nov 24 04:22:12 np0005533252 systemd-logind[823]: Removed session 13.
Nov 24 04:22:17 np0005533252 systemd-logind[823]: New session 14 of user zuul.
Nov 24 04:22:17 np0005533252 systemd[1]: Started Session 14 of User zuul.
Nov 24 04:22:18 np0005533252 python3.9[58611]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:19 np0005533252 python3.9[58763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:20 np0005533252 python3.9[58886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976138.7907178-63-226891339846700/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:20 np0005533252 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 04:22:20 np0005533252 systemd[1]: session-14.scope: Consumed 1.484s CPU time.
Nov 24 04:22:20 np0005533252 systemd-logind[823]: Session 14 logged out. Waiting for processes to exit.
Nov 24 04:22:20 np0005533252 systemd-logind[823]: Removed session 14.
Nov 24 04:22:25 np0005533252 systemd-logind[823]: New session 15 of user zuul.
Nov 24 04:22:25 np0005533252 systemd[1]: Started Session 15 of User zuul.
Nov 24 04:22:27 np0005533252 python3.9[59064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:22:28 np0005533252 python3.9[59220]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:29 np0005533252 python3.9[59395]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:29 np0005533252 python3.9[59518]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763976148.4121735-84-172823111442343/.source.json _original_basename=.lcmvl22e follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:30 np0005533252 python3.9[59670]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:31 np0005533252 python3.9[59793]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976150.2118373-153-80396389662123/.source _original_basename=.4vx01pe0 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:31 np0005533252 python3.9[59945]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:22:32 np0005533252 python3.9[60097]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:33 np0005533252 python3.9[60220]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976152.293694-225-175621927301560/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:22:34 np0005533252 python3.9[60372]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:34 np0005533252 python3.9[60495]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976153.4742823-225-237315767649989/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:22:35 np0005533252 python3.9[60647]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:36 np0005533252 python3.9[60799]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:36 np0005533252 python3.9[60922]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976155.6749961-336-136570983211301/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:37 np0005533252 python3.9[61074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:38 np0005533252 python3.9[61197]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976157.065727-381-232516502848711/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:39 np0005533252 python3.9[61349]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:22:39 np0005533252 systemd[1]: Reloading.
Nov 24 04:22:39 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:22:39 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:22:39 np0005533252 systemd[1]: Reloading.
Nov 24 04:22:39 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:22:39 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:22:39 np0005533252 systemd[1]: Starting EDPM Container Shutdown...
Nov 24 04:22:39 np0005533252 systemd[1]: Finished EDPM Container Shutdown.
Nov 24 04:22:40 np0005533252 python3.9[61577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:41 np0005533252 python3.9[61700]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976160.0381753-450-26281208721621/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:42 np0005533252 python3.9[61852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:42 np0005533252 python3.9[61975]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976161.586404-495-53002319546640/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:43 np0005533252 python3.9[62127]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:22:43 np0005533252 systemd[1]: Reloading.
Nov 24 04:22:43 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:22:43 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:22:43 np0005533252 systemd[1]: Reloading.
Nov 24 04:22:43 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:22:43 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:22:43 np0005533252 systemd[1]: Starting Create netns directory...
Nov 24 04:22:43 np0005533252 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 04:22:43 np0005533252 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 04:22:43 np0005533252 systemd[1]: Finished Create netns directory.
Nov 24 04:22:45 np0005533252 python3.9[62354]: ansible-ansible.builtin.service_facts Invoked
Nov 24 04:22:45 np0005533252 network[62371]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:22:45 np0005533252 network[62372]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:22:45 np0005533252 network[62373]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:22:50 np0005533252 python3.9[62635]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:22:50 np0005533252 systemd[1]: Reloading.
Nov 24 04:22:50 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:22:50 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:22:50 np0005533252 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 24 04:22:50 np0005533252 iptables.init[62676]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 24 04:22:50 np0005533252 iptables.init[62676]: iptables: Flushing firewall rules: [  OK  ]
Nov 24 04:22:50 np0005533252 systemd[1]: iptables.service: Deactivated successfully.
Nov 24 04:22:50 np0005533252 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 24 04:22:51 np0005533252 python3.9[62872]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:22:52 np0005533252 python3.9[63026]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:22:53 np0005533252 systemd[1]: Reloading.
Nov 24 04:22:53 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:22:53 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:22:53 np0005533252 systemd[1]: Starting Netfilter Tables...
Nov 24 04:22:53 np0005533252 systemd[1]: Finished Netfilter Tables.
Nov 24 04:22:54 np0005533252 python3.9[63217]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:22:56 np0005533252 python3.9[63370]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:22:56 np0005533252 python3.9[63495]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976175.7773805-702-137520813282911/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:22:57 np0005533252 python3.9[63648]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:22:57 np0005533252 systemd[1]: Reloading OpenSSH server daemon...
Nov 24 04:22:57 np0005533252 systemd[1]: Reloaded OpenSSH server daemon.
Nov 24 04:22:59 np0005533252 python3.9[63804]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:00 np0005533252 python3.9[63956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:00 np0005533252 python3.9[64079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976179.7181048-795-102538838295799/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:01 np0005533252 python3.9[64231]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 04:23:01 np0005533252 systemd[1]: Starting Time & Date Service...
Nov 24 04:23:02 np0005533252 systemd[1]: Started Time & Date Service.
Nov 24 04:23:02 np0005533252 python3.9[64387]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:03 np0005533252 python3.9[64539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:04 np0005533252 python3.9[64662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976183.057765-900-243609680268420/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:04 np0005533252 python3.9[64814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:05 np0005533252 python3.9[64937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976184.3838298-945-146699640676737/.source.yaml _original_basename=.fscg11hm follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:06 np0005533252 python3.9[65089]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:06 np0005533252 python3.9[65212]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976185.672321-990-261392881013558/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:07 np0005533252 python3.9[65364]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:23:08 np0005533252 python3.9[65517]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:23:08 np0005533252 python3[65670]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 04:23:09 np0005533252 python3.9[65822]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:10 np0005533252 python3.9[65945]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976189.2337089-1107-184560609197846/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:11 np0005533252 python3.9[66097]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:11 np0005533252 python3.9[66220]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976190.5981045-1152-204616539127448/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:12 np0005533252 python3.9[66372]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:13 np0005533252 python3.9[66495]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976192.020344-1197-36554317248963/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:13 np0005533252 python3.9[66647]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:14 np0005533252 python3.9[66770]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976193.4498825-1242-273305705793837/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:15 np0005533252 python3.9[66922]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:23:15 np0005533252 python3.9[67045]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976194.8697999-1287-259037734698884/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:16 np0005533252 python3.9[67197]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:17 np0005533252 python3.9[67349]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:23:18 np0005533252 python3.9[67508]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:19 np0005533252 python3.9[67661]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:20 np0005533252 python3.9[67813]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:21 np0005533252 python3.9[67965]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 04:23:21 np0005533252 python3.9[68118]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 04:23:22 np0005533252 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 04:23:22 np0005533252 systemd[1]: session-15.scope: Consumed 32.205s CPU time.
Nov 24 04:23:22 np0005533252 systemd-logind[823]: Session 15 logged out. Waiting for processes to exit.
Nov 24 04:23:22 np0005533252 systemd-logind[823]: Removed session 15.
Nov 24 04:23:28 np0005533252 systemd-logind[823]: New session 16 of user zuul.
Nov 24 04:23:28 np0005533252 systemd[1]: Started Session 16 of User zuul.
Nov 24 04:23:28 np0005533252 python3.9[68299]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 04:23:29 np0005533252 python3.9[68451]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:23:31 np0005533252 python3.9[68603]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:23:32 np0005533252 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 04:23:32 np0005533252 python3.9[68755]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnPh2FYKCqB5Rxe2d73LAea+vmvipLFksP43GM8QFNtdkL9UXsBFKIlbvhCArQ0+q5/EXcOy13rEWVabeuzYdek35bvnCWnqrlaoEFqEV7Y7SDrutMHxHvnLthse/1jj4AvtjvQXG0bKruDgtz2CBksRaKWTEHPZHLOYOwWLGogWVazacOPagjlMQ9UdpYvwfqgKnjMpl6sHCvQC7C0kTNvrYrrhUZqReUWyggx/XcC/YJvSYvMW1wNRhYmypPzEXu8QXt0ywHvCucILZcZqBE1/lKAUCLqDEkB/xpMnKiZ/EmDtyv8AP7H231WeEoaU4BziaD2jSd/H6lr2JJwpKBlrGkti8gQpJHtDytAtbVtrLD5fW+1GkobqN/2GXjNnvzuLB36OhT4nysfJ6BPP3sgaaZ2RJSzP5hI3jfFVn/NYjbaRIoo+tOB50PJeIPj6c5uMX+Qcb2V6EOUwogIRhtwN7A1XHh8dQPCUVYCUmNIq1K7NZ3Hxf+BqhVsSj6SK0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINu5/fR7YXhb91kwrOd7U+mnimdcm+o61ru6zTYmFIZO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJFgzeIWa1Ve+dIxs7Pjz8TnBGpgkm/KAIeb7PoVU+QfPqP68TrTBJjwgq/5DOilENFVsFmr+3WdERS0uMWfxXo=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyBn9mTS8EhHsIKYO0tLgGtKOo5KK33vyjqFzXOs43ZcW8GNKmSQ7DXnq80OCGGkDE9aL5uVEQ82MaYpYE8rZVZGrTF1heqhLe2ModNgcaUA+dBOzScRYEm5JAsj6ajcAc7fiPseazHiC80XQlEo+bwF6XHf/i9t7MHMqQCKdM+qnsEd6JeYe+Zy6X7Web4mN4mbvDaHxjBAdxuR0g0bKoYRjFeeNQyQQ/2Fpsa/i/ZqFVU59TrQ1vm9wLk9wJQd7mBQsdxizekzHGMkE5Ub8VdN43iscVyKKhZWeUOyEK2HASt+n/fHjIsFD65a4GLiHFuJ8DJ4CrWFrwt1RIXLkNFOImjH5kiMO55d/Qogf5F33Mkto3ntPQP/tShtBEDIzc9JCE7vYLFjk/bMSUcK9/u41E8suBkZBHnzXC8+eB6XCoYYNxA+cowaSg5+YCSxL6yON9u34LV+i3jZosNYNivLHjOmOsyGEs/Az6NLkHYzxYCHY042etu9Py2/lONrk=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDX1cMQF3siye3qNUS07EBS+iX+poG1/aIqFR51WsltV#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFy78zaPxoZwc0f5pE0EdJcb6EwSlQGeMhelmYFBlrBeD2fH3vCrxrTbbmmM9DSQFtIo8sNV7/s7CV9dvbvMOzQ=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYj9G0Ft/Psyl/13EAEebfB7qR7surocLwWTVKKcclTBPrKIFnHkxuGFUee1a6DQGup+ENEdhJN2MOXFv/jskxJUsoILDHuvx17jHKFvMSR7ycfe+1umEqgfKCHGxlLXobZjj7t2PzAveNkTk+zeX8pqLH1q86LI01fH0n3jdSksqEXvxbiDLMspPTM3alGxNI4pztPvN3i+0qfCPD5SL9dhFsP4C8IVTBWAM4g7Qd6LyKhx+MVoEVecLL6jsM8z+zArVsZKFcZOKFpl0MTeWdpNR0b4u0ILO59y38D/dVoM45NRDpIi7HyoS7TsD0XpP+3zP8hGo4M35QU+a9YRmdCaUChLmqjfUprjnQrusAuQfP406rQ3JlgWs3YAwF0IPhvHv57pPWm3xGwKPFpO0Jguw5cQdZZvYk4tS9JvlCz5+Yyfm3+9T+k1KLfcZ+zlvOYKz+BXNiPfk1bF9ML7/KEIyJjGf32o5nEp0H1sH24wrSIroXa+woila4KBTffe8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFQe/vdPzZywzEntIohbfJ9grfNBp30Atbg8qy8BeQ3c#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPhaUxRkg9RrudtznCKCcwWhf1hoSfCyCfTHlGI62beVEpMD4en9bzfcuYnvB/Qm3vgzgUVMpS53KCL9bmqBfT8=#012 create=True mode=0644 path=/tmp/ansible.lkmmo6bm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:33 np0005533252 python3.9[68909]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lkmmo6bm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:23:33 np0005533252 python3.9[69063]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lkmmo6bm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:34 np0005533252 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 04:23:34 np0005533252 systemd[1]: session-16.scope: Consumed 3.297s CPU time.
Nov 24 04:23:34 np0005533252 systemd-logind[823]: Session 16 logged out. Waiting for processes to exit.
Nov 24 04:23:34 np0005533252 systemd-logind[823]: Removed session 16.
Nov 24 04:23:40 np0005533252 systemd-logind[823]: New session 17 of user zuul.
Nov 24 04:23:40 np0005533252 systemd[1]: Started Session 17 of User zuul.
Nov 24 04:23:41 np0005533252 python3.9[69241]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:23:43 np0005533252 python3.9[69397]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 04:23:43 np0005533252 python3.9[69551]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:23:44 np0005533252 python3.9[69704]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:23:45 np0005533252 python3.9[69857]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:23:46 np0005533252 python3.9[70011]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:23:47 np0005533252 python3.9[70166]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:23:48 np0005533252 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 04:23:48 np0005533252 systemd[1]: session-17.scope: Consumed 4.082s CPU time.
Nov 24 04:23:48 np0005533252 systemd-logind[823]: Session 17 logged out. Waiting for processes to exit.
Nov 24 04:23:48 np0005533252 systemd-logind[823]: Removed session 17.
Nov 24 04:23:52 np0005533252 systemd-logind[823]: New session 18 of user zuul.
Nov 24 04:23:53 np0005533252 systemd[1]: Started Session 18 of User zuul.
Nov 24 04:23:54 np0005533252 python3.9[70345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:23:55 np0005533252 python3.9[70501]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:23:56 np0005533252 python3.9[70585]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 04:23:58 np0005533252 python3.9[70736]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:23:59 np0005533252 python3.9[70887]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 04:24:00 np0005533252 python3.9[71037]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:24:00 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:24:00 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:24:00 np0005533252 python3.9[71188]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:24:01 np0005533252 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 04:24:01 np0005533252 systemd[1]: session-18.scope: Consumed 5.468s CPU time.
Nov 24 04:24:01 np0005533252 systemd-logind[823]: Session 18 logged out. Waiting for processes to exit.
Nov 24 04:24:01 np0005533252 systemd-logind[823]: Removed session 18.
Nov 24 04:24:10 np0005533252 systemd-logind[823]: New session 19 of user zuul.
Nov 24 04:24:10 np0005533252 systemd[1]: Started Session 19 of User zuul.
Nov 24 04:24:17 np0005533252 python3[71954]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:24:19 np0005533252 python3[72049]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 04:24:20 np0005533252 python3[72076]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 04:24:21 np0005533252 python3[72102]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:24:21 np0005533252 kernel: loop: module loaded
Nov 24 04:24:21 np0005533252 kernel: loop3: detected capacity change from 0 to 41943040
Nov 24 04:24:21 np0005533252 python3[72137]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:24:21 np0005533252 chronyd[58430]: Selected source 23.133.168.246 (pool.ntp.org)
Nov 24 04:24:21 np0005533252 lvm[72140]: PV /dev/loop3 not used.
Nov 24 04:24:21 np0005533252 lvm[72149]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 04:24:21 np0005533252 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 24 04:24:22 np0005533252 lvm[72151]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 24 04:24:22 np0005533252 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 24 04:24:22 np0005533252 python3[72229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 04:24:22 np0005533252 python3[72302]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976262.2802742-36787-166273025161474/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:24:23 np0005533252 python3[72352]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:24:23 np0005533252 systemd[1]: Reloading.
Nov 24 04:24:23 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:24:23 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:24:24 np0005533252 systemd[1]: Starting Ceph OSD losetup...
Nov 24 04:24:24 np0005533252 bash[72392]: /dev/loop3: [64513]:4194934 (/var/lib/ceph-osd-0.img)
Nov 24 04:24:24 np0005533252 systemd[1]: Finished Ceph OSD losetup.
Nov 24 04:24:24 np0005533252 lvm[72393]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 04:24:24 np0005533252 lvm[72393]: VG ceph_vg0 finished
Nov 24 04:24:26 np0005533252 python3[72417]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:25:50 np0005533252 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 04:25:50 np0005533252 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 04:25:50 np0005533252 systemd-logind[823]: New session 20 of user ceph-admin.
Nov 24 04:25:50 np0005533252 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 04:25:50 np0005533252 systemd[1]: Starting User Manager for UID 42477...
Nov 24 04:25:51 np0005533252 systemd[72465]: Queued start job for default target Main User Target.
Nov 24 04:25:51 np0005533252 systemd[72465]: Created slice User Application Slice.
Nov 24 04:25:51 np0005533252 systemd[72465]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 04:25:51 np0005533252 systemd[72465]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 04:25:51 np0005533252 systemd[72465]: Reached target Paths.
Nov 24 04:25:51 np0005533252 systemd[72465]: Reached target Timers.
Nov 24 04:25:51 np0005533252 systemd[72465]: Starting D-Bus User Message Bus Socket...
Nov 24 04:25:51 np0005533252 systemd[72465]: Starting Create User's Volatile Files and Directories...
Nov 24 04:25:51 np0005533252 systemd-logind[823]: New session 22 of user ceph-admin.
Nov 24 04:25:51 np0005533252 systemd[72465]: Listening on D-Bus User Message Bus Socket.
Nov 24 04:25:51 np0005533252 systemd[72465]: Finished Create User's Volatile Files and Directories.
Nov 24 04:25:51 np0005533252 systemd[72465]: Reached target Sockets.
Nov 24 04:25:51 np0005533252 systemd[72465]: Reached target Basic System.
Nov 24 04:25:51 np0005533252 systemd[72465]: Reached target Main User Target.
Nov 24 04:25:51 np0005533252 systemd[72465]: Startup finished in 121ms.
Nov 24 04:25:51 np0005533252 systemd[1]: Started User Manager for UID 42477.
Nov 24 04:25:51 np0005533252 systemd[1]: Started Session 20 of User ceph-admin.
Nov 24 04:25:51 np0005533252 systemd[1]: Started Session 22 of User ceph-admin.
Nov 24 04:25:51 np0005533252 systemd-logind[823]: New session 23 of user ceph-admin.
Nov 24 04:25:51 np0005533252 systemd[1]: Started Session 23 of User ceph-admin.
Nov 24 04:25:51 np0005533252 systemd-logind[823]: New session 24 of user ceph-admin.
Nov 24 04:25:51 np0005533252 systemd[1]: Started Session 24 of User ceph-admin.
Nov 24 04:25:52 np0005533252 systemd-logind[823]: New session 25 of user ceph-admin.
Nov 24 04:25:52 np0005533252 systemd[1]: Started Session 25 of User ceph-admin.
Nov 24 04:25:52 np0005533252 systemd-logind[823]: New session 26 of user ceph-admin.
Nov 24 04:25:52 np0005533252 systemd[1]: Started Session 26 of User ceph-admin.
Nov 24 04:25:52 np0005533252 systemd-logind[823]: New session 27 of user ceph-admin.
Nov 24 04:25:52 np0005533252 systemd[1]: Started Session 27 of User ceph-admin.
Nov 24 04:25:53 np0005533252 systemd-logind[823]: New session 28 of user ceph-admin.
Nov 24 04:25:53 np0005533252 systemd[1]: Started Session 28 of User ceph-admin.
Nov 24 04:25:53 np0005533252 systemd-logind[823]: New session 29 of user ceph-admin.
Nov 24 04:25:53 np0005533252 systemd[1]: Started Session 29 of User ceph-admin.
Nov 24 04:25:53 np0005533252 systemd-logind[823]: New session 30 of user ceph-admin.
Nov 24 04:25:53 np0005533252 systemd[1]: Started Session 30 of User ceph-admin.
Nov 24 04:25:54 np0005533252 systemd-logind[823]: New session 31 of user ceph-admin.
Nov 24 04:25:54 np0005533252 systemd[1]: Started Session 31 of User ceph-admin.
Nov 24 04:25:55 np0005533252 systemd-logind[823]: New session 32 of user ceph-admin.
Nov 24 04:25:55 np0005533252 systemd[1]: Started Session 32 of User ceph-admin.
Nov 24 04:25:55 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:25:55 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:25:56 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:25:56 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:25:56 np0005533252 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73039 (sysctl)
Nov 24 04:25:56 np0005533252 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 24 04:25:56 np0005533252 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 24 04:25:57 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:25:57 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:25:57 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:25:59 np0005533252 systemd[1]: var-lib-containers-storage-overlay-compat2894534498-lower\x2dmapped.mount: Deactivated successfully.
Nov 24 04:26:13 np0005533252 podman[73217]: 2025-11-24 09:26:13.76347605 +0000 UTC m=+15.977551177 container create 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 04:26:13 np0005533252 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1531498207-merged.mount: Deactivated successfully.
Nov 24 04:26:13 np0005533252 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 24 04:26:13 np0005533252 systemd[1]: Started libpod-conmon-86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500.scope.
Nov 24 04:26:13 np0005533252 podman[73217]: 2025-11-24 09:26:13.749037736 +0000 UTC m=+15.963112893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:13 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:13 np0005533252 podman[73217]: 2025-11-24 09:26:13.854716401 +0000 UTC m=+16.068791548 container init 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:26:13 np0005533252 podman[73217]: 2025-11-24 09:26:13.866533278 +0000 UTC m=+16.080608405 container start 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Nov 24 04:26:13 np0005533252 podman[73217]: 2025-11-24 09:26:13.869740598 +0000 UTC m=+16.083815745 container attach 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 04:26:13 np0005533252 reverent_euler[73280]: 167 167
Nov 24 04:26:13 np0005533252 systemd[1]: libpod-86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500.scope: Deactivated successfully.
Nov 24 04:26:13 np0005533252 podman[73217]: 2025-11-24 09:26:13.87296978 +0000 UTC m=+16.087044907 container died 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:26:13 np0005533252 systemd[1]: var-lib-containers-storage-overlay-3e00e2893bc3f25bf411230888aca3fcc44a7ba62c69e0bf1259f1dbeddf4fc7-merged.mount: Deactivated successfully.
Nov 24 04:26:13 np0005533252 podman[73217]: 2025-11-24 09:26:13.908211645 +0000 UTC m=+16.122286772 container remove 86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 04:26:13 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:26:13 np0005533252 systemd[1]: libpod-conmon-86422483aa9765a8ef931e3c87031d2a9c9d7f842a81b0ad094bb3b5d96fb500.scope: Deactivated successfully.
Nov 24 04:26:14 np0005533252 podman[73306]: 2025-11-24 09:26:14.079602819 +0000 UTC m=+0.042952869 container create 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 04:26:14 np0005533252 systemd[1]: Started libpod-conmon-501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687.scope.
Nov 24 04:26:14 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:14 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a674ff89c625a52d8946547f52f06c2b425868a20e7ffd089dfe5be19b5007/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:14 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a674ff89c625a52d8946547f52f06c2b425868a20e7ffd089dfe5be19b5007/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:14 np0005533252 podman[73306]: 2025-11-24 09:26:14.14850673 +0000 UTC m=+0.111856790 container init 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 04:26:14 np0005533252 podman[73306]: 2025-11-24 09:26:14.154308906 +0000 UTC m=+0.117658936 container start 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:26:14 np0005533252 podman[73306]: 2025-11-24 09:26:14.157094766 +0000 UTC m=+0.120444806 container attach 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 04:26:14 np0005533252 podman[73306]: 2025-11-24 09:26:14.064176552 +0000 UTC m=+0.027526612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]: [
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:    {
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "available": false,
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "being_replaced": false,
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "ceph_device_lvm": false,
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "lsm_data": {},
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "lvs": [],
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "path": "/dev/sr0",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "rejected_reasons": [
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "Insufficient space (<5GB)",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "Has a FileSystem"
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        ],
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        "sys_api": {
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "actuators": null,
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "device_nodes": [
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:                "sr0"
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            ],
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "devname": "sr0",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "human_readable_size": "482.00 KB",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "id_bus": "ata",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "model": "QEMU DVD-ROM",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "nr_requests": "2",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "parent": "/dev/sr0",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "partitions": {},
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "path": "/dev/sr0",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "removable": "1",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "rev": "2.5+",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "ro": "0",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "rotational": "1",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "sas_address": "",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "sas_device_handle": "",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "scheduler_mode": "mq-deadline",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "sectors": 0,
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "sectorsize": "2048",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "size": 493568.0,
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "support_discard": "2048",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "type": "disk",
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:            "vendor": "QEMU"
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:        }
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]:    }
Nov 24 04:26:14 np0005533252 objective_sanderson[73322]: ]
Nov 24 04:26:14 np0005533252 systemd[1]: libpod-501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687.scope: Deactivated successfully.
Nov 24 04:26:14 np0005533252 podman[73306]: 2025-11-24 09:26:14.775897158 +0000 UTC m=+0.739247198 container died 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 04:26:14 np0005533252 systemd[1]: var-lib-containers-storage-overlay-77a674ff89c625a52d8946547f52f06c2b425868a20e7ffd089dfe5be19b5007-merged.mount: Deactivated successfully.
Nov 24 04:26:14 np0005533252 podman[73306]: 2025-11-24 09:26:14.819660097 +0000 UTC m=+0.783010137 container remove 501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 04:26:14 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:26:14 np0005533252 systemd[1]: libpod-conmon-501c9508cc468cb706ecd43b7409e1abf530767e5730b0dd6c9129dfc9c0d687.scope: Deactivated successfully.
Nov 24 04:26:17 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:26:17 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:26:17 np0005533252 podman[75306]: 2025-11-24 09:26:17.40172466 +0000 UTC m=+0.037586445 container create 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 04:26:17 np0005533252 systemd[1]: Started libpod-conmon-0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614.scope.
Nov 24 04:26:17 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:17 np0005533252 podman[75306]: 2025-11-24 09:26:17.459187163 +0000 UTC m=+0.095048968 container init 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 04:26:17 np0005533252 podman[75306]: 2025-11-24 09:26:17.465206374 +0000 UTC m=+0.101068169 container start 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 04:26:17 np0005533252 podman[75306]: 2025-11-24 09:26:17.467933903 +0000 UTC m=+0.103795718 container attach 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 04:26:17 np0005533252 eager_aryabhata[75322]: 167 167
Nov 24 04:26:17 np0005533252 systemd[1]: libpod-0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614.scope: Deactivated successfully.
Nov 24 04:26:17 np0005533252 podman[75306]: 2025-11-24 09:26:17.469055671 +0000 UTC m=+0.104917466 container died 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:26:17 np0005533252 podman[75306]: 2025-11-24 09:26:17.382942389 +0000 UTC m=+0.018804204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:17 np0005533252 podman[75306]: 2025-11-24 09:26:17.505856055 +0000 UTC m=+0.141717900 container remove 0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Nov 24 04:26:17 np0005533252 systemd[1]: libpod-conmon-0b0431ffa48b4b6a89975a78682580fd250caaad54580e3cfedf55ea68133614.scope: Deactivated successfully.
Nov 24 04:26:17 np0005533252 systemd[1]: Reloading.
Nov 24 04:26:17 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:26:17 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:26:17 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:26:17 np0005533252 systemd[1]: Reloading.
Nov 24 04:26:17 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:26:17 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:26:18 np0005533252 systemd[1]: Reached target All Ceph clusters and services.
Nov 24 04:26:18 np0005533252 systemd[1]: Reloading.
Nov 24 04:26:18 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:26:18 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:26:18 np0005533252 systemd[1]: Reached target Ceph cluster 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:26:18 np0005533252 systemd[1]: Reloading.
Nov 24 04:26:18 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:26:18 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:26:18 np0005533252 systemd[1]: Reloading.
Nov 24 04:26:18 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:26:18 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:26:18 np0005533252 systemd[1]: Created slice Slice /system/ceph-84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:26:18 np0005533252 systemd[1]: Reached target System Time Set.
Nov 24 04:26:18 np0005533252 systemd[1]: Reached target System Time Synchronized.
Nov 24 04:26:18 np0005533252 systemd[1]: Starting Ceph crash.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:26:18 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:26:18 np0005533252 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 04:26:19 np0005533252 podman[75577]: 2025-11-24 09:26:19.135805495 +0000 UTC m=+0.056849569 container create fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:26:19 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2414924c115c84265443f467477ad00ad6ae5d6bfa362464a8ef018c5014825/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:19 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2414924c115c84265443f467477ad00ad6ae5d6bfa362464a8ef018c5014825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:19 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2414924c115c84265443f467477ad00ad6ae5d6bfa362464a8ef018c5014825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:19 np0005533252 podman[75577]: 2025-11-24 09:26:19.197127225 +0000 UTC m=+0.118171329 container init fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 04:26:19 np0005533252 podman[75577]: 2025-11-24 09:26:19.104355475 +0000 UTC m=+0.025399629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:19 np0005533252 podman[75577]: 2025-11-24 09:26:19.208245554 +0000 UTC m=+0.129289638 container start fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 04:26:19 np0005533252 bash[75577]: fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc
Nov 24 04:26:19 np0005533252 systemd[1]: Started Ceph crash.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.383+0000 7fa8c0fb7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.383+0000 7fa8c0fb7640 -1 AuthRegistry(0x7fa8bc069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.384+0000 7fa8c0fb7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.384+0000 7fa8c0fb7640 -1 AuthRegistry(0x7fa8c0fb5ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.386+0000 7fa8ba575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: 2025-11-24T09:26:19.386+0000 7fa8c0fb7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 24 04:26:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1[75592]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 24 04:26:19 np0005533252 podman[75698]: 2025-11-24 09:26:19.896013529 +0000 UTC m=+0.051540865 container create e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 04:26:19 np0005533252 systemd[1]: Started libpod-conmon-e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781.scope.
Nov 24 04:26:19 np0005533252 podman[75698]: 2025-11-24 09:26:19.873944675 +0000 UTC m=+0.029472051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:19 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:19 np0005533252 podman[75698]: 2025-11-24 09:26:19.995936779 +0000 UTC m=+0.151464125 container init e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:26:20 np0005533252 podman[75698]: 2025-11-24 09:26:20.013471349 +0000 UTC m=+0.168998705 container start e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 04:26:20 np0005533252 podman[75698]: 2025-11-24 09:26:20.017816448 +0000 UTC m=+0.173343834 container attach e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 04:26:20 np0005533252 kind_lewin[75714]: 167 167
Nov 24 04:26:20 np0005533252 systemd[1]: libpod-e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781.scope: Deactivated successfully.
Nov 24 04:26:20 np0005533252 podman[75698]: 2025-11-24 09:26:20.024746622 +0000 UTC m=+0.180273968 container died e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:26:20 np0005533252 systemd[1]: var-lib-containers-storage-overlay-974caae571d38a47cfbcaa396eda3a2c08174dfcdd5db0786b9dfebf4f1aa797-merged.mount: Deactivated successfully.
Nov 24 04:26:20 np0005533252 podman[75698]: 2025-11-24 09:26:20.057110896 +0000 UTC m=+0.212638242 container remove e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_lewin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 04:26:20 np0005533252 systemd[1]: libpod-conmon-e45eea1b0c5479d823be51e52081bfe11a6fe8a21c56f585f1ebd4774698b781.scope: Deactivated successfully.
Nov 24 04:26:20 np0005533252 podman[75739]: 2025-11-24 09:26:20.254507644 +0000 UTC m=+0.051955947 container create 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Nov 24 04:26:20 np0005533252 systemd[1]: Started libpod-conmon-0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d.scope.
Nov 24 04:26:20 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:20 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:20 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:20 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:20 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:20 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:20 np0005533252 podman[75739]: 2025-11-24 09:26:20.2360874 +0000 UTC m=+0.033535723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:20 np0005533252 podman[75739]: 2025-11-24 09:26:20.34077266 +0000 UTC m=+0.138220993 container init 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 04:26:20 np0005533252 podman[75739]: 2025-11-24 09:26:20.358519986 +0000 UTC m=+0.155968279 container start 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 04:26:20 np0005533252 podman[75739]: 2025-11-24 09:26:20.362198768 +0000 UTC m=+0.159647071 container attach 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Nov 24 04:26:20 np0005533252 adoring_ellis[75756]: --> passed data devices: 0 physical, 1 LVM
Nov 24 04:26:20 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:20 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:20 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d66edcc6-663b-43db-9331-33ccbb320884
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 04:26:21 np0005533252 lvm[75817]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 04:26:21 np0005533252 lvm[75817]: VG ceph_vg0 finished
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: stderr: got monmap epoch 1
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: --> Creating keyring file for osd.1
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 24 04:26:21 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid d66edcc6-663b-43db-9331-33ccbb320884 --setuser ceph --setgroup ceph
Nov 24 04:26:24 np0005533252 adoring_ellis[75756]: stderr: 2025-11-24T09:26:21.902+0000 7f08c6f48740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Nov 24 04:26:24 np0005533252 adoring_ellis[75756]: stderr: 2025-11-24T09:26:22.175+0000 7f08c6f48740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 24 04:26:24 np0005533252 adoring_ellis[75756]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 24 04:26:24 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 04:26:24 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 04:26:25 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:25 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:25 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 04:26:25 np0005533252 adoring_ellis[75756]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 04:26:25 np0005533252 adoring_ellis[75756]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 04:26:25 np0005533252 adoring_ellis[75756]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 24 04:26:25 np0005533252 systemd[1]: libpod-0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d.scope: Deactivated successfully.
Nov 24 04:26:25 np0005533252 podman[75739]: 2025-11-24 09:26:25.110048768 +0000 UTC m=+4.907497061 container died 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 04:26:25 np0005533252 systemd[1]: libpod-0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d.scope: Consumed 2.026s CPU time.
Nov 24 04:26:25 np0005533252 systemd[1]: var-lib-containers-storage-overlay-f2ebb3cceb4bf91794d4f6df9aa4625ea9c84d0826f4737606000ffe44b5e21d-merged.mount: Deactivated successfully.
Nov 24 04:26:25 np0005533252 podman[75739]: 2025-11-24 09:26:25.168584198 +0000 UTC m=+4.966032491 container remove 0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 04:26:25 np0005533252 systemd[1]: libpod-conmon-0885db390a813aa887e3f326c869d1778c785c619223ce958e44394f5d02bf7d.scope: Deactivated successfully.
Nov 24 04:26:25 np0005533252 podman[76839]: 2025-11-24 09:26:25.760539817 +0000 UTC m=+0.032533848 container create 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:26:25 np0005533252 systemd[1]: Started libpod-conmon-974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1.scope.
Nov 24 04:26:25 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:25 np0005533252 podman[76839]: 2025-11-24 09:26:25.84072928 +0000 UTC m=+0.112723331 container init 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 04:26:25 np0005533252 podman[76839]: 2025-11-24 09:26:25.746461643 +0000 UTC m=+0.018455714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:25 np0005533252 podman[76839]: 2025-11-24 09:26:25.852912037 +0000 UTC m=+0.124906078 container start 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:26:25 np0005533252 podman[76839]: 2025-11-24 09:26:25.855965703 +0000 UTC m=+0.127959774 container attach 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:26:25 np0005533252 practical_brattain[76855]: 167 167
Nov 24 04:26:25 np0005533252 systemd[1]: libpod-974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1.scope: Deactivated successfully.
Nov 24 04:26:25 np0005533252 podman[76839]: 2025-11-24 09:26:25.861181615 +0000 UTC m=+0.133175656 container died 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Nov 24 04:26:25 np0005533252 systemd[1]: var-lib-containers-storage-overlay-4fb3d7dac866a2dba0e4a64b776bfd371d7924c57ac7b3e8f7bed9f153e797ec-merged.mount: Deactivated successfully.
Nov 24 04:26:25 np0005533252 podman[76839]: 2025-11-24 09:26:25.896528782 +0000 UTC m=+0.168522823 container remove 974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 04:26:25 np0005533252 systemd[1]: libpod-conmon-974d887d8ec63bb3f576a65c45805ee3c08c1a81617b456bcb3f3433780984e1.scope: Deactivated successfully.
Nov 24 04:26:26 np0005533252 podman[76877]: 2025-11-24 09:26:26.103089261 +0000 UTC m=+0.049551326 container create a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 04:26:26 np0005533252 systemd[1]: Started libpod-conmon-a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824.scope.
Nov 24 04:26:26 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:26 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:26 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:26 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:26 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:26 np0005533252 podman[76877]: 2025-11-24 09:26:26.085215622 +0000 UTC m=+0.031677727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:26 np0005533252 podman[76877]: 2025-11-24 09:26:26.184977887 +0000 UTC m=+0.131439972 container init a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:26:26 np0005533252 podman[76877]: 2025-11-24 09:26:26.192745823 +0000 UTC m=+0.139207888 container start a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 04:26:26 np0005533252 podman[76877]: 2025-11-24 09:26:26.195388219 +0000 UTC m=+0.141850284 container attach a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]: {
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:    "1": [
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:        {
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "devices": [
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "/dev/loop3"
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            ],
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "lv_name": "ceph_lv0",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "lv_size": "21470642176",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=hKqi78-1PuH-NO5r-o51i-OjPb-2kPE-hGsfdb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d66edcc6-663b-43db-9331-33ccbb320884,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "lv_uuid": "hKqi78-1PuH-NO5r-o51i-OjPb-2kPE-hGsfdb",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "name": "ceph_lv0",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "tags": {
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.block_uuid": "hKqi78-1PuH-NO5r-o51i-OjPb-2kPE-hGsfdb",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.cephx_lockbox_secret": "",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.cluster_name": "ceph",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.crush_device_class": "",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.encrypted": "0",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.osd_fsid": "d66edcc6-663b-43db-9331-33ccbb320884",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.osd_id": "1",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.type": "block",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.vdo": "0",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:                "ceph.with_tpm": "0"
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            },
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "type": "block",
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:            "vg_name": "ceph_vg0"
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:        }
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]:    ]
Nov 24 04:26:26 np0005533252 lucid_faraday[76893]: }
Nov 24 04:26:26 np0005533252 systemd[1]: libpod-a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824.scope: Deactivated successfully.
Nov 24 04:26:26 np0005533252 podman[76877]: 2025-11-24 09:26:26.525325605 +0000 UTC m=+0.471787680 container died a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:26:26 np0005533252 systemd[1]: var-lib-containers-storage-overlay-7c0586c537b675c3a9c16f2ec338bd1a9e9fb6ae1b9c225cac19b32026ee284d-merged.mount: Deactivated successfully.
Nov 24 04:26:26 np0005533252 podman[76877]: 2025-11-24 09:26:26.569248169 +0000 UTC m=+0.515710234 container remove a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 04:26:26 np0005533252 systemd[1]: libpod-conmon-a9a1f0222160b2ea04a5229a190df843b7554ad8f183807ee1589b169e00c824.scope: Deactivated successfully.
Nov 24 04:26:27 np0005533252 podman[77003]: 2025-11-24 09:26:27.195638612 +0000 UTC m=+0.055391633 container create 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 04:26:27 np0005533252 systemd[1]: Started libpod-conmon-303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869.scope.
Nov 24 04:26:27 np0005533252 podman[77003]: 2025-11-24 09:26:27.158751346 +0000 UTC m=+0.018504387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:27 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:27 np0005533252 podman[77003]: 2025-11-24 09:26:27.290574177 +0000 UTC m=+0.150327218 container init 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 04:26:27 np0005533252 podman[77003]: 2025-11-24 09:26:27.299999724 +0000 UTC m=+0.159752755 container start 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 04:26:27 np0005533252 nifty_stonebraker[77019]: 167 167
Nov 24 04:26:27 np0005533252 systemd[1]: libpod-303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869.scope: Deactivated successfully.
Nov 24 04:26:27 np0005533252 podman[77003]: 2025-11-24 09:26:27.304528687 +0000 UTC m=+0.164281708 container attach 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 04:26:27 np0005533252 podman[77003]: 2025-11-24 09:26:27.304882386 +0000 UTC m=+0.164635407 container died 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:26:27 np0005533252 systemd[1]: var-lib-containers-storage-overlay-fdd98df389cd13536f680e84948561840a99ddb1a03038644df2707ad371a1b0-merged.mount: Deactivated successfully.
Nov 24 04:26:27 np0005533252 podman[77003]: 2025-11-24 09:26:27.334171691 +0000 UTC m=+0.193924722 container remove 303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 04:26:27 np0005533252 systemd[1]: libpod-conmon-303874641ec78c5d48a6478f637f8ddfad9ca6205fc4022fb462d019c147a869.scope: Deactivated successfully.
Nov 24 04:26:27 np0005533252 podman[77049]: 2025-11-24 09:26:27.593176227 +0000 UTC m=+0.044935420 container create eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:26:27 np0005533252 systemd[1]: Started libpod-conmon-eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07.scope.
Nov 24 04:26:27 np0005533252 podman[77049]: 2025-11-24 09:26:27.574391134 +0000 UTC m=+0.026150307 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:27 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:27 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:27 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:27 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:27 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:27 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:27 np0005533252 podman[77049]: 2025-11-24 09:26:27.699883057 +0000 UTC m=+0.151642230 container init eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:26:27 np0005533252 podman[77049]: 2025-11-24 09:26:27.711480628 +0000 UTC m=+0.163239781 container start eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:26:27 np0005533252 podman[77049]: 2025-11-24 09:26:27.717255804 +0000 UTC m=+0.169014977 container attach eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 04:26:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test[77065]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 24 04:26:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test[77065]:                            [--no-systemd] [--no-tmpfs]
Nov 24 04:26:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test[77065]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 04:26:27 np0005533252 systemd[1]: libpod-eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07.scope: Deactivated successfully.
Nov 24 04:26:27 np0005533252 podman[77049]: 2025-11-24 09:26:27.892783262 +0000 UTC m=+0.344542455 container died eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:26:27 np0005533252 systemd[1]: var-lib-containers-storage-overlay-902efd4e410547f0ca0ccb2ccf00893b983276228e51421bc6bf120c16741a8a-merged.mount: Deactivated successfully.
Nov 24 04:26:27 np0005533252 podman[77049]: 2025-11-24 09:26:27.94445899 +0000 UTC m=+0.396218153 container remove eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:26:27 np0005533252 systemd[1]: libpod-conmon-eff30742f34dfaf822d4b726018bbf4f9eacfc34f05767b771dde736c2f63c07.scope: Deactivated successfully.
Nov 24 04:26:28 np0005533252 systemd[1]: Reloading.
Nov 24 04:26:28 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:26:28 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:26:28 np0005533252 systemd[1]: Reloading.
Nov 24 04:26:28 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:26:28 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:26:28 np0005533252 systemd[1]: Starting Ceph osd.1 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:26:28 np0005533252 podman[77224]: 2025-11-24 09:26:28.979519987 +0000 UTC m=+0.038492147 container create 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Nov 24 04:26:29 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:29 np0005533252 podman[77224]: 2025-11-24 09:26:29.054186232 +0000 UTC m=+0.113158402 container init 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 04:26:29 np0005533252 podman[77224]: 2025-11-24 09:26:28.961338831 +0000 UTC m=+0.020311021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:29 np0005533252 podman[77224]: 2025-11-24 09:26:29.062106992 +0000 UTC m=+0.121079152 container start 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Nov 24 04:26:29 np0005533252 podman[77224]: 2025-11-24 09:26:29.065104777 +0000 UTC m=+0.124076957 container attach 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 04:26:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:29 np0005533252 bash[77224]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:29 np0005533252 bash[77224]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:29 np0005533252 lvm[77319]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 04:26:29 np0005533252 lvm[77319]: VG ceph_vg0 finished
Nov 24 04:26:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 24 04:26:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:29 np0005533252 bash[77224]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 24 04:26:29 np0005533252 bash[77224]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:29 np0005533252 bash[77224]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 04:26:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 04:26:29 np0005533252 bash[77224]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 04:26:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 04:26:29 np0005533252 bash[77224]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 04:26:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:30 np0005533252 bash[77224]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:30 np0005533252 bash[77224]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 04:26:30 np0005533252 bash[77224]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 04:26:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 04:26:30 np0005533252 bash[77224]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 04:26:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate[77238]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 04:26:30 np0005533252 bash[77224]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 04:26:30 np0005533252 systemd[1]: libpod-37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f.scope: Deactivated successfully.
Nov 24 04:26:30 np0005533252 systemd[1]: libpod-37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f.scope: Consumed 1.267s CPU time.
Nov 24 04:26:30 np0005533252 podman[77224]: 2025-11-24 09:26:30.2691983 +0000 UTC m=+1.328170480 container died 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 04:26:30 np0005533252 systemd[1]: var-lib-containers-storage-overlay-5772f31678217f9ce719efcf80e6d8557ba541c46675bc0ad345094d391fdb2b-merged.mount: Deactivated successfully.
Nov 24 04:26:30 np0005533252 podman[77224]: 2025-11-24 09:26:30.318892888 +0000 UTC m=+1.377865048 container remove 37bf49d7886b6993125c7ee1fc06d5dcca821dd64db4bc80ab466d27f416722f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1-activate, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:26:30 np0005533252 podman[77477]: 2025-11-24 09:26:30.500633252 +0000 UTC m=+0.034901156 container create 074006852d3bf59eab891b01d7d44e37ff455737962785dcf590ba91b5bd182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 04:26:30 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:30 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:30 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:30 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:30 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20364e1a80cc0f2dd656806b3ef56b9a4afe6fbb399c54acd1c2245f9bd2d8c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:30 np0005533252 podman[77477]: 2025-11-24 09:26:30.559267905 +0000 UTC m=+0.093535829 container init 074006852d3bf59eab891b01d7d44e37ff455737962785dcf590ba91b5bd182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:26:30 np0005533252 podman[77477]: 2025-11-24 09:26:30.566097837 +0000 UTC m=+0.100365741 container start 074006852d3bf59eab891b01d7d44e37ff455737962785dcf590ba91b5bd182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 04:26:30 np0005533252 bash[77477]: 074006852d3bf59eab891b01d7d44e37ff455737962785dcf590ba91b5bd182c
Nov 24 04:26:30 np0005533252 podman[77477]: 2025-11-24 09:26:30.485979645 +0000 UTC m=+0.020247569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:30 np0005533252 systemd[1]: Started Ceph osd.1 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: pidfile_write: ignore empty --pid-file
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:30 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:31 np0005533252 podman[77604]: 2025-11-24 09:26:31.0786234 +0000 UTC m=+0.041635696 container create 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:26:31 np0005533252 systemd[1]: Started libpod-conmon-8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb.scope.
Nov 24 04:26:31 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:31 np0005533252 podman[77604]: 2025-11-24 09:26:31.061466679 +0000 UTC m=+0.024478995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:31 np0005533252 podman[77604]: 2025-11-24 09:26:31.165233455 +0000 UTC m=+0.128245771 container init 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:26:31 np0005533252 podman[77604]: 2025-11-24 09:26:31.173099893 +0000 UTC m=+0.136112189 container start 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 04:26:31 np0005533252 podman[77604]: 2025-11-24 09:26:31.176955349 +0000 UTC m=+0.139967695 container attach 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:26:31 np0005533252 quizzical_engelbart[77620]: 167 167
Nov 24 04:26:31 np0005533252 systemd[1]: libpod-8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb.scope: Deactivated successfully.
Nov 24 04:26:31 np0005533252 podman[77604]: 2025-11-24 09:26:31.180338295 +0000 UTC m=+0.143350591 container died 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 04:26:31 np0005533252 systemd[1]: var-lib-containers-storage-overlay-9f0692e4056a89d5cae3738aa49d0bc670841366a357395f6a463bee39170503-merged.mount: Deactivated successfully.
Nov 24 04:26:31 np0005533252 podman[77604]: 2025-11-24 09:26:31.212167695 +0000 UTC m=+0.175179991 container remove 8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_engelbart, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:31 np0005533252 systemd[1]: libpod-conmon-8d87c7fd53e68ed8de5e5727a2a14e8bcf403eee0225f7664f8e8891170e18bb.scope: Deactivated successfully.
Nov 24 04:26:31 np0005533252 podman[77644]: 2025-11-24 09:26:31.380036591 +0000 UTC m=+0.066357198 container create 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 04:26:31 np0005533252 systemd[1]: Started libpod-conmon-17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7.scope.
Nov 24 04:26:31 np0005533252 podman[77644]: 2025-11-24 09:26:31.341747019 +0000 UTC m=+0.028067696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:31 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:31 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:31 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:31 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:31 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:31 np0005533252 podman[77644]: 2025-11-24 09:26:31.485561371 +0000 UTC m=+0.171881948 container init 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 04:26:31 np0005533252 podman[77644]: 2025-11-24 09:26:31.495758827 +0000 UTC m=+0.182079394 container start 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:31 np0005533252 podman[77644]: 2025-11-24 09:26:31.499192064 +0000 UTC m=+0.185512651 container attach 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:31 np0005533252 ceph-osd[77497]: bdev(0x5634bb945800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: load: jerasure load: lrc 
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:32 np0005533252 lvm[77743]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 04:26:32 np0005533252 lvm[77743]: VG ceph_vg0 finished
Nov 24 04:26:32 np0005533252 interesting_lamarr[77660]: {}
Nov 24 04:26:32 np0005533252 systemd[1]: libpod-17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7.scope: Deactivated successfully.
Nov 24 04:26:32 np0005533252 systemd[1]: libpod-17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7.scope: Consumed 1.171s CPU time.
Nov 24 04:26:32 np0005533252 podman[77746]: 2025-11-24 09:26:32.278866036 +0000 UTC m=+0.026075495 container died 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 04:26:32 np0005533252 systemd[1]: var-lib-containers-storage-overlay-0a698d9f57c0dd27f68e60b82299f051b559413dc61c1e6ca557e67600eabca0-merged.mount: Deactivated successfully.
Nov 24 04:26:32 np0005533252 podman[77746]: 2025-11-24 09:26:32.31075819 +0000 UTC m=+0.057967619 container remove 17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 04:26:32 np0005533252 systemd[1]: libpod-conmon-17dfbbe4b6f75722be508611871aae43401626e6c06637de2a8125a332576de7.scope: Deactivated successfully.
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
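Editor's note: the `bdev ... open size` line above reports the same device size three ways (hex, bytes, GiB). A quick check in plain Python, with the values copied from the log, confirms they are mutually consistent and that "20 GiB" is a rounded figure:

```python
# Values copied from the "bdev ... open size" log line above.
size_bytes = 0x4FFC00000          # hex figure from the log
assert size_bytes == 21470642176  # decimal figure from the log

# The "20 GiB" in the log is rounded: the device is 4 MiB shy of
# a full 20 GiB (1 GiB = 2**30 bytes).
size_gib = size_bytes / 2**30
print(f"{size_gib} GiB")          # 19.99609375 GiB
```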
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
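Editor's note: the two mClockScheduler figures above are not independent; dividing the per-shard bandwidth capacity by the per-IO cost recovers the IOPS capacity the scheduler assumed. The attribution to the rotational-device default (`osd_mclock_max_capacity_iops_hdd`) is my reading, not something the log states:

```python
# Figures copied from the mClockScheduler log line above.
cost_per_io = 499321.90           # osd_bandwidth_cost_per_io, bytes/io
capacity_per_shard = 157286400.0  # bytes/second

# The per-shard capacity is exactly 150 MiB/s.
assert capacity_per_shard == 150 * 2**20

# Implied IOPS per shard; ~315 matches Ceph's default HDD IOPS
# capacity for rotational devices -- an inference, not logged.
iops = capacity_per_shard / cost_per_io
print(round(iops))  # 315
```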
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:32 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e0c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
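Editor's note: the `_set_cache_sizes` ratios above (meta 0.45, kv 0.45, kv_onode 0.04, data 0.06) partition the 1 GiB cache with nothing left over. The per-slice byte budgets work out as:

```python
# Figures copied from the _set_cache_sizes log line above.
cache_size = 1073741824  # 1 GiB
ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}

# The four ratios cover the whole cache.
assert abs(sum(ratios.values()) - 1.0) < 1e-9

for name, r in ratios.items():
    print(f"{name}: {int(cache_size * r)} bytes")
```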
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount shared_bdev_used = 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: RocksDB version: 7.9.2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Git sha 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: DB SUMMARY
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: DB Session ID:  02OCALY6G9ZWHVIYWD2O
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: CURRENT file:  CURRENT
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                         Options.error_if_exists: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.create_if_missing: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                                     Options.env: 0x5634bc7b1dc0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                                Options.info_log: 0x5634bc7b57a0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                              Options.statistics: (nil)
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.use_fsync: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                              Options.db_log_dir: 
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.write_buffer_manager: 0x5634bc8aaa00
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.unordered_write: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.row_cache: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                              Options.wal_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.two_write_queues: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.wal_compression: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.atomic_flush: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.max_background_jobs: 4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.max_background_compactions: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.max_subcompactions: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.max_open_files: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Compression algorithms supported:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kZSTD supported: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kXpressCompression supported: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kBZip2Compression supported: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kLZ4Compression supported: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kZlibCompression supported: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kSnappyCompression supported: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
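Editor's note: several figures in the `[default]` column-family options dump above are derived rather than arbitrary. In particular, `max_compaction_bytes` is 25x `target_file_size_base` (RocksDB's documented derivation when the option is left unset), and the pending-compaction back-pressure limits are round 64 GiB / 256 GiB thresholds:

```python
# Figures copied from the Options dump above.
target_file_size_base = 67108864  # 64 MiB
max_compaction_bytes = 1677721600
soft_limit = 68719476736          # soft_pending_compaction_bytes_limit
hard_limit = 274877906944         # hard_pending_compaction_bytes_limit

# RocksDB defaults max_compaction_bytes to 25x the target file size.
assert max_compaction_bytes == 25 * target_file_size_base

# Back-pressure thresholds: slow down writes at 64 GiB of pending
# compaction debt, stop them at 256 GiB.
assert soft_limit == 64 * 2**30
assert hard_limit == 256 * 2**30
```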
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9da9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9da9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9da9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bb65118b-1a2a-4ee6-a16f-d5932cc21adb
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393161976, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393162192, "job": 1, "event": "recovery_finished"}
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: freelist init
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: freelist _read_cfg
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs umount
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bdev(0x5634bc7e1000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluefs mount shared_bdev_used = 4718592
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: RocksDB version: 7.9.2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Git sha 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: DB SUMMARY
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: DB Session ID:  02OCALY6G9ZWHVIYWD2P
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: CURRENT file:  CURRENT
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                         Options.error_if_exists: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.create_if_missing: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                                     Options.env: 0x5634bc94e310
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                                Options.info_log: 0x5634bc7b5b20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                              Options.statistics: (nil)
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.use_fsync: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                              Options.db_log_dir: 
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.write_buffer_manager: 0x5634bc8aaa00
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.unordered_write: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.row_cache: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                              Options.wal_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.two_write_queues: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.wal_compression: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.atomic_flush: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.max_background_jobs: 4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.max_background_compactions: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.max_subcompactions: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.max_open_files: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Compression algorithms supported:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kZSTD supported: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kXpressCompression supported: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kBZip2Compression supported: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kLZ4Compression supported: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kZlibCompression supported: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: 	kSnappyCompression supported: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9db350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5634bb9db350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:           Options.merge_operator: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5634bc7b5ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5634bb9da9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.compression: LZ4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.num_levels: 7
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bb65118b-1a2a-4ee6-a16f-d5932cc21adb
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393442615, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393449012, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976393, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb65118b-1a2a-4ee6-a16f-d5932cc21adb", "db_session_id": "02OCALY6G9ZWHVIYWD2P", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393451680, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976393, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb65118b-1a2a-4ee6-a16f-d5932cc21adb", "db_session_id": "02OCALY6G9ZWHVIYWD2P", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393453849, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976393, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bb65118b-1a2a-4ee6-a16f-d5932cc21adb", "db_session_id": "02OCALY6G9ZWHVIYWD2P", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393455263, "job": 1, "event": "recovery_finished"}
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5634bc9b2000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: DB pointer 0x5634bc95c000
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 460.80 MB usage: 0
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: _get_class not permitted to load lua
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: _get_class not permitted to load sdk
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: osd.1 0 load_pgs
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: osd.1 0 load_pgs opened 0 pgs
Nov 24 04:26:33 np0005533252 ceph-osd[77497]: osd.1 0 log_to_monitors true
Nov 24 04:26:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1[77493]: 2025-11-24T09:26:33.478+0000 7f344d029740 -1 osd.1 0 log_to_monitors true
Nov 24 04:26:33 np0005533252 podman[78331]: 2025-11-24 09:26:33.888366658 +0000 UTC m=+0.054157862 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 04:26:33 np0005533252 podman[78331]: 2025-11-24 09:26:33.999820374 +0000 UTC m=+0.165611578 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:26:34 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 04:26:34 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 04:26:34 np0005533252 podman[78470]: 2025-11-24 09:26:34.672164834 +0000 UTC m=+0.040493242 container create c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 04:26:34 np0005533252 systemd[1]: Started libpod-conmon-c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9.scope.
Nov 24 04:26:34 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:34 np0005533252 podman[78470]: 2025-11-24 09:26:34.654012192 +0000 UTC m=+0.022340630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:34 np0005533252 podman[78470]: 2025-11-24 09:26:34.7636407 +0000 UTC m=+0.131969158 container init c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 04:26:34 np0005533252 podman[78470]: 2025-11-24 09:26:34.771694825 +0000 UTC m=+0.140023233 container start c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 04:26:34 np0005533252 podman[78470]: 2025-11-24 09:26:34.775337119 +0000 UTC m=+0.143665547 container attach c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:26:34 np0005533252 gracious_kare[78486]: 167 167
Nov 24 04:26:34 np0005533252 systemd[1]: libpod-c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9.scope: Deactivated successfully.
Nov 24 04:26:34 np0005533252 podman[78470]: 2025-11-24 09:26:34.778491575 +0000 UTC m=+0.146819983 container died c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 04:26:34 np0005533252 systemd[1]: var-lib-containers-storage-overlay-d72de2e3386eb9030aa324f237758c4afe4b791f970f0aefa030a2fc25dee66d-merged.mount: Deactivated successfully.
Nov 24 04:26:34 np0005533252 podman[78470]: 2025-11-24 09:26:34.810818538 +0000 UTC m=+0.179146946 container remove c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 04:26:34 np0005533252 systemd[1]: libpod-conmon-c15201c72d9e209f06e336496e0d7bd82929f981090144ce3a0bdefd0098aed9.scope: Deactivated successfully.
Nov 24 04:26:34 np0005533252 podman[78511]: 2025-11-24 09:26:34.986731945 +0000 UTC m=+0.062759477 container create 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 04:26:35 np0005533252 systemd[1]: Started libpod-conmon-0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032.scope.
Nov 24 04:26:35 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:26:35 np0005533252 podman[78511]: 2025-11-24 09:26:34.952865181 +0000 UTC m=+0.028892783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:26:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:26:35 np0005533252 podman[78511]: 2025-11-24 09:26:35.059897417 +0000 UTC m=+0.135924949 container init 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 04:26:35 np0005533252 podman[78511]: 2025-11-24 09:26:35.068190987 +0000 UTC m=+0.144218489 container start 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 04:26:35 np0005533252 podman[78511]: 2025-11-24 09:26:35.071238519 +0000 UTC m=+0.147266041 container attach 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 04:26:35 np0005533252 ceph-osd[77497]: osd.1 0 done with init, starting boot process
Nov 24 04:26:35 np0005533252 ceph-osd[77497]: osd.1 0 start_boot
Nov 24 04:26:35 np0005533252 ceph-osd[77497]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 04:26:35 np0005533252 ceph-osd[77497]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 04:26:35 np0005533252 ceph-osd[77497]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 04:26:35 np0005533252 ceph-osd[77497]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 04:26:35 np0005533252 ceph-osd[77497]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]: [
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:    {
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "available": false,
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "being_replaced": false,
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "ceph_device_lvm": false,
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "lsm_data": {},
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "lvs": [],
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "path": "/dev/sr0",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "rejected_reasons": [
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "Has a FileSystem",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "Insufficient space (<5GB)"
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        ],
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        "sys_api": {
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "actuators": null,
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "device_nodes": [
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:                "sr0"
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            ],
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "devname": "sr0",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "human_readable_size": "482.00 KB",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "id_bus": "ata",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "model": "QEMU DVD-ROM",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "nr_requests": "2",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "parent": "/dev/sr0",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "partitions": {},
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "path": "/dev/sr0",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "removable": "1",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "rev": "2.5+",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "ro": "0",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "rotational": "1",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "sas_address": "",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "sas_device_handle": "",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "scheduler_mode": "mq-deadline",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "sectors": 0,
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "sectorsize": "2048",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "size": 493568.0,
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "support_discard": "2048",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "type": "disk",
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:            "vendor": "QEMU"
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:        }
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]:    }
Nov 24 04:26:35 np0005533252 naughty_galileo[78527]: ]
Nov 24 04:26:35 np0005533252 systemd[1]: libpod-0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032.scope: Deactivated successfully.
Nov 24 04:26:35 np0005533252 podman[79665]: 2025-11-24 09:26:35.764668591 +0000 UTC m=+0.020755127 container died 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 04:26:37 np0005533252 systemd[1]: var-lib-containers-storage-overlay-7a1bf355fd80ae25b5251dda1c4c8fac9704308a181e82422f4d636960975f58-merged.mount: Deactivated successfully.
Nov 24 04:26:37 np0005533252 podman[79665]: 2025-11-24 09:26:37.544896145 +0000 UTC m=+1.800982651 container remove 0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 04:26:37 np0005533252 systemd[1]: libpod-conmon-0202949abe1f024b558f56c18741e5987044a2f7ee566683a619f49212517032.scope: Deactivated successfully.
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 42.922 iops: 10987.995 elapsed_sec: 0.273
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: log_channel(cluster) log [WRN] : OSD bench result of 10987.994700 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 0 waiting for initial osdmap
Nov 24 04:26:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1[77493]: 2025-11-24T09:26:39.418+0000 7f34497bf640 -1 osd.1 0 waiting for initial osdmap
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 7 check_osdmap_features require_osd_release unknown -> squid
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 7 set_numa_affinity not setting numa affinity
Nov 24 04:26:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-1[77493]: 2025-11-24T09:26:39.437+0000 7f34445d4640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 24 04:26:39 np0005533252 ceph-osd[77497]: osd.1 8 state: booting -> active
Nov 24 04:26:41 np0005533252 ceph-osd[77497]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 04:26:41 np0005533252 ceph-osd[77497]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 24 04:26:41 np0005533252 ceph-osd[77497]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 04:26:41 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [1] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:26:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 11 pg[1.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [1] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:01 np0005533252 podman[79776]: 2025-11-24 09:27:01.475929256 +0000 UTC m=+0.039394180 container create 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 04:27:01 np0005533252 systemd[1]: Started libpod-conmon-5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c.scope.
Nov 24 04:27:01 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:27:01 np0005533252 podman[79776]: 2025-11-24 09:27:01.548264809 +0000 UTC m=+0.111729753 container init 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 04:27:01 np0005533252 podman[79776]: 2025-11-24 09:27:01.459082069 +0000 UTC m=+0.022547013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:27:01 np0005533252 podman[79776]: 2025-11-24 09:27:01.55853958 +0000 UTC m=+0.122004514 container start 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 04:27:01 np0005533252 podman[79776]: 2025-11-24 09:27:01.562198985 +0000 UTC m=+0.125663929 container attach 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 04:27:01 np0005533252 silly_lalande[79792]: 167 167
Nov 24 04:27:01 np0005533252 systemd[1]: libpod-5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c.scope: Deactivated successfully.
Nov 24 04:27:01 np0005533252 podman[79776]: 2025-11-24 09:27:01.564945321 +0000 UTC m=+0.128410245 container died 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:27:01 np0005533252 systemd[1]: var-lib-containers-storage-overlay-206c5dbe4a6485aef50f2e9a8ec132438040c32b4ffa6d2b7e571822d20a5a17-merged.mount: Deactivated successfully.
Nov 24 04:27:01 np0005533252 podman[79776]: 2025-11-24 09:27:01.602636814 +0000 UTC m=+0.166101748 container remove 5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_lalande, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 04:27:01 np0005533252 systemd[1]: libpod-conmon-5744dab53c48ab3af9b7949ad0ace9e88723809e6401c7cee90eefa6175b131c.scope: Deactivated successfully.
Nov 24 04:27:01 np0005533252 podman[79809]: 2025-11-24 09:27:01.659580762 +0000 UTC m=+0.035589041 container create 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Nov 24 04:27:01 np0005533252 systemd[1]: Started libpod-conmon-9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29.scope.
Nov 24 04:27:01 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:27:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:01 np0005533252 podman[79809]: 2025-11-24 09:27:01.722794499 +0000 UTC m=+0.098802808 container init 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 04:27:01 np0005533252 podman[79809]: 2025-11-24 09:27:01.729967876 +0000 UTC m=+0.105976165 container start 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 04:27:01 np0005533252 podman[79809]: 2025-11-24 09:27:01.733729363 +0000 UTC m=+0.109737652 container attach 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 04:27:01 np0005533252 podman[79809]: 2025-11-24 09:27:01.644146846 +0000 UTC m=+0.020155165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:27:01 np0005533252 systemd[1]: libpod-9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29.scope: Deactivated successfully.
Nov 24 04:27:01 np0005533252 podman[79809]: 2025-11-24 09:27:01.823332521 +0000 UTC m=+0.199340810 container died 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:27:01 np0005533252 systemd[1]: var-lib-containers-storage-overlay-2cc85eae6c9c36102543a7b92197d663c38fa07e6cee76e89f26ee58fb1b318f-merged.mount: Deactivated successfully.
Nov 24 04:27:01 np0005533252 podman[79809]: 2025-11-24 09:27:01.855617053 +0000 UTC m=+0.231625342 container remove 9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 04:27:01 np0005533252 systemd[1]: libpod-conmon-9e3cdf3f1233b36ce52f369bd04b45144388da47e5f0e6bb8df055e1c77f4b29.scope: Deactivated successfully.
Nov 24 04:27:01 np0005533252 systemd[1]: Reloading.
Nov 24 04:27:01 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:27:01 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:27:02 np0005533252 systemd[1]: Reloading.
Nov 24 04:27:02 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:27:02 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:27:02 np0005533252 systemd[1]: Starting Ceph mon.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:27:02 np0005533252 podman[79989]: 2025-11-24 09:27:02.653197671 +0000 UTC m=+0.040442061 container create 515e62465fc9d8059b784860ec72fb021260af5e219f16728bf6dad72e3c38a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:27:02 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edee9d0e12a37ecdff4e90bc62e2300ec56e12e4365cee9a2158103da2588584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:02 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edee9d0e12a37ecdff4e90bc62e2300ec56e12e4365cee9a2158103da2588584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:02 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edee9d0e12a37ecdff4e90bc62e2300ec56e12e4365cee9a2158103da2588584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:02 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edee9d0e12a37ecdff4e90bc62e2300ec56e12e4365cee9a2158103da2588584/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:02 np0005533252 podman[79989]: 2025-11-24 09:27:02.703677816 +0000 UTC m=+0.090922206 container init 515e62465fc9d8059b784860ec72fb021260af5e219f16728bf6dad72e3c38a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:27:02 np0005533252 podman[79989]: 2025-11-24 09:27:02.713823005 +0000 UTC m=+0.101067385 container start 515e62465fc9d8059b784860ec72fb021260af5e219f16728bf6dad72e3c38a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:27:02 np0005533252 bash[79989]: 515e62465fc9d8059b784860ec72fb021260af5e219f16728bf6dad72e3c38a3
Nov 24 04:27:02 np0005533252 podman[79989]: 2025-11-24 09:27:02.633552128 +0000 UTC m=+0.020796548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:27:02 np0005533252 systemd[1]: Started Ceph mon.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: pidfile_write: ignore empty --pid-file
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: load: jerasure load: lrc 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: RocksDB version: 7.9.2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Git sha 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: DB SUMMARY
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: DB Session ID:  IKBI0BILOO7CZC90TSBP
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: CURRENT file:  CURRENT
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                         Options.error_if_exists: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                       Options.create_if_missing: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                                     Options.env: 0x55a5fdc67c20
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                                Options.info_log: 0x55a5fe7d1a20
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                              Options.statistics: (nil)
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                               Options.use_fsync: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                              Options.db_log_dir: 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                                 Options.wal_dir: 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                    Options.write_buffer_manager: 0x55a5fe7d5900
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.unordered_write: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                               Options.row_cache: None
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                              Options.wal_filter: None
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.two_write_queues: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.wal_compression: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.atomic_flush: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.max_background_jobs: 2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.max_background_compactions: -1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.max_subcompactions: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                          Options.max_open_files: -1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Compression algorithms supported:
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: 	kZSTD supported: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: 	kXpressCompression supported: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: 	kBZip2Compression supported: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: 	kLZ4Compression supported: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: 	kZlibCompression supported: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: 	kSnappyCompression supported: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:           Options.merge_operator: 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:        Options.compaction_filter: None
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a5fe7d05c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a5fe7f5350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:          Options.compression: NoCompression
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.num_levels: 7
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                           Options.bloom_locality: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                               Options.ttl: 2592000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                       Options.enable_blob_files: false
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                           Options.min_blob_size: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 299c38d0-06ca-4074-b462-97cee3c14bc3
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976422763515, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976422765338, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976422765506, "job": 1, "event": "recovery_finished"}
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a5fe7f6e00
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: DB pointer 0x55a5fe900000
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(???) e0 preinit fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).mds e1 new map
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012btime 2025-11-24T09:25:05:540478+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 2 up, 2 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Deploying daemon crash.compute-1 on compute-1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3344904896' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3344904896' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.101:0/3818245863' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.101:0/3818245863' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Deploying daemon osd.1 on compute-1
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Deploying daemon osd.0 on compute-0
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Adjusting osd_memory_target on compute-0 to 128.0M
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: OSD bench result of 6266.692144 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: OSD bench result of 10987.994700 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864] boot
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744] boot
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Deploying daemon mon.compute-2 on compute-2
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: Cluster is now healthy
Nov 24 04:27:02 np0005533252 ceph-mon[80009]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 24 04:27:08 np0005533252 ceph-mon[80009]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Nov 24 04:27:08 np0005533252 ceph-mon[80009]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 24 04:27:08 np0005533252 ceph-mon[80009]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 24 04:27:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: Deploying daemon mon.compute-1 on compute-1
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: mon.compute-0 calling monitor election
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: mon.compute-2 calling monitor election
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: overall HEALTH_OK
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 04:27:11 np0005533252 ceph-mon[80009]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 24 04:27:12 np0005533252 ceph-mon[80009]: mon.compute-0 calling monitor election
Nov 24 04:27:12 np0005533252 ceph-mon[80009]: mon.compute-2 calling monitor election
Nov 24 04:27:12 np0005533252 ceph-mon[80009]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 24 04:27:12 np0005533252 ceph-mon[80009]: overall HEALTH_OK
Nov 24 04:27:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 24 04:27:12 np0005533252 podman[80139]: 2025-11-24 09:27:12.528376104 +0000 UTC m=+0.038313897 container create 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:27:12 np0005533252 systemd[1]: Started libpod-conmon-010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa.scope.
Nov 24 04:27:12 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:27:12 np0005533252 podman[80139]: 2025-11-24 09:27:12.512006609 +0000 UTC m=+0.021944432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:27:12 np0005533252 podman[80139]: 2025-11-24 09:27:12.612658613 +0000 UTC m=+0.122596426 container init 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:27:12 np0005533252 podman[80139]: 2025-11-24 09:27:12.620453612 +0000 UTC m=+0.130391405 container start 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 04:27:12 np0005533252 podman[80139]: 2025-11-24 09:27:12.62423625 +0000 UTC m=+0.134174063 container attach 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 04:27:12 np0005533252 jolly_kowalevski[80155]: 167 167
Nov 24 04:27:12 np0005533252 systemd[1]: libpod-010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa.scope: Deactivated successfully.
Nov 24 04:27:12 np0005533252 conmon[80155]: conmon 010ed44705477b83144e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa.scope/container/memory.events
Nov 24 04:27:12 np0005533252 podman[80139]: 2025-11-24 09:27:12.62812406 +0000 UTC m=+0.138061853 container died 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 04:27:12 np0005533252 systemd[1]: var-lib-containers-storage-overlay-8cc21078caa96df806b77e00d44a024a512f2dd237cbc6376b65da37bc05b102-merged.mount: Deactivated successfully.
Nov 24 04:27:12 np0005533252 podman[80139]: 2025-11-24 09:27:12.660068425 +0000 UTC m=+0.170006218 container remove 010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:27:12 np0005533252 systemd[1]: libpod-conmon-010ed44705477b83144efd2e84cc3f71cd000fb5f5367095e49eee536da639aa.scope: Deactivated successfully.
Nov 24 04:27:12 np0005533252 systemd[1]: Reloading.
Nov 24 04:27:12 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:27:12 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:27:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e12 _set_new_cache_sizes cache_size:1019937309 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:12 np0005533252 systemd[1]: Reloading.
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: mon.compute-1 calling monitor election
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: Deploying daemon mgr.compute-1.qelqsg on compute-1
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/1623978198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 04:27:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e13 e13: 2 total, 2 up, 2 in
Nov 24 04:27:13 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:13 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:27:13 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:27:13 np0005533252 systemd[1]: Starting Ceph mgr.compute-1.qelqsg for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:27:13 np0005533252 podman[80296]: 2025-11-24 09:27:13.451796484 +0000 UTC m=+0.036666603 container create 060bf9a9568d5a1517ba559812a28ddc141c219d3df64d1d6697255909caeb65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:27:13 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb76d903953ac3e50f42bf090ea3925d2c405d2f3059059f14bd330c2eb4597/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:13 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb76d903953ac3e50f42bf090ea3925d2c405d2f3059059f14bd330c2eb4597/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:13 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb76d903953ac3e50f42bf090ea3925d2c405d2f3059059f14bd330c2eb4597/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:13 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb76d903953ac3e50f42bf090ea3925d2c405d2f3059059f14bd330c2eb4597/merged/var/lib/ceph/mgr/ceph-compute-1.qelqsg supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:13 np0005533252 podman[80296]: 2025-11-24 09:27:13.508463335 +0000 UTC m=+0.093333484 container init 060bf9a9568d5a1517ba559812a28ddc141c219d3df64d1d6697255909caeb65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 04:27:13 np0005533252 podman[80296]: 2025-11-24 09:27:13.516694004 +0000 UTC m=+0.101564123 container start 060bf9a9568d5a1517ba559812a28ddc141c219d3df64d1d6697255909caeb65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:27:13 np0005533252 bash[80296]: 060bf9a9568d5a1517ba559812a28ddc141c219d3df64d1d6697255909caeb65
Nov 24 04:27:13 np0005533252 podman[80296]: 2025-11-24 09:27:13.436061521 +0000 UTC m=+0.020931640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:27:13 np0005533252 systemd[1]: Started Ceph mgr.compute-1.qelqsg for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:27:13 np0005533252 ceph-mgr[80316]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 04:27:13 np0005533252 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 04:27:13 np0005533252 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 04:27:13 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 04:27:13 np0005533252 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:27:13 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 04:27:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:13.676+0000 7f74b3087140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:27:13 np0005533252 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:27:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:13.756+0000 7f74b3087140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:27:13 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/1623978198' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2842040450' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 04:27:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e14 e14: 2 total, 2 up, 2 in
Nov 24 04:27:14 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:14 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 04:27:14 np0005533252 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:27:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:14.567+0000 7f74b3087140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:27:14 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 04:27:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e15 e15: 2 total, 2 up, 2 in
Nov 24 04:27:15 np0005533252 ceph-mon[80009]: Deploying daemon crash.compute-2 on compute-2
Nov 24 04:27:15 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2842040450' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 04:27:15 np0005533252 ceph-mon[80009]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 04:27:15 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/1077027605' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:27:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:15.223+0000 7f74b3087140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 04:27:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 04:27:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 04:27:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:  from numpy import show_config as show_numpy_config
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:27:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:15.405+0000 7f74b3087140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:27:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:15.480+0000 7f74b3087140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:27:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:15.640+0000 7f74b3087140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:27:15 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 04:27:16 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/1077027605' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2174323893' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 04:27:16 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2174323893' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 04:27:16 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 04:27:16 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 04:27:16 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 04:27:16 np0005533252 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:27:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:16.713+0000 7f74b3087140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:27:16 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 04:27:16 np0005533252 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:27:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:16.955+0000 7f74b3087140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:27:16 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 04:27:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.034+0000 7f74b3087140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.108+0000 7f74b3087140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 04:27:17 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/499996439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 04:27:17 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:17 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/499996439' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.194+0000 7f74b3087140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.271+0000 7f74b3087140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.643+0000 7f74b3087140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 04:27:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e18 e18: 3 total, 2 up, 3 in
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:17.748+0000 7f74b3087140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 04:27:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e18 _set_new_cache_sizes cache_size:1020053330 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:17 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 04:27:18 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]: dispatch
Nov 24 04:27:18 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.102:0/1514770584' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]: dispatch
Nov 24 04:27:18 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]': finished
Nov 24 04:27:18 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2555317958' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 04:27:18 np0005533252 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:27:18 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 04:27:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:18.206+0000 7f74b3087140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:27:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e19 e19: 3 total, 2 up, 3 in
Nov 24 04:27:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 19 pg[7.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:18 np0005533252 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:27:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:18.794+0000 7f74b3087140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:27:18 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 04:27:18 np0005533252 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:27:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:18.865+0000 7f74b3087140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:27:18 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 04:27:18 np0005533252 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:27:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:18.944+0000 7f74b3087140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:27:18 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 04:27:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.096+0000 7f74b3087140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2555317958' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.172+0000 7f74b3087140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.340+0000 7f74b3087140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.578+0000 7f74b3087140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 04:27:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Nov 24 04:27:19 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 20 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.872+0000 7f74b3087140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:19.944+0000 7f74b3087140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:27:19 np0005533252 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x5637c5566d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 04:27:20 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2927635265' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 04:27:20 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2927635265' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 04:27:20 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:20 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Nov 24 04:27:21 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2444820917' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 04:27:21 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2444820917' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 04:27:22 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3988938670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 04:27:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Nov 24 04:27:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e22 _set_new_cache_sizes cache_size:1020054713 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:23 np0005533252 ceph-mon[80009]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 04:27:23 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3988938670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 04:27:23 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3230617921' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 04:27:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Nov 24 04:27:24 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3230617921' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 04:27:24 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 04:27:24 np0005533252 ceph-mon[80009]: Deploying daemon osd.2 on compute-2
Nov 24 04:27:24 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2984871477' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 04:27:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Nov 24 04:27:25 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2984871477' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 04:27:25 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2208436282' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 04:27:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Nov 24 04:27:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Nov 24 04:27:26 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2208436282' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 04:27:26 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:27:27 np0005533252 ceph-mon[80009]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 24 04:27:27 np0005533252 ceph-mon[80009]: Cluster is now healthy
Nov 24 04:27:27 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:27:27 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:27:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Nov 24 04:27:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:28 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:27:28 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:27:28 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:28 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:28 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:28 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Nov 24 04:27:28 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 28 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=28 pruub=9.575661659s) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active pruub 64.608634949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:28 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 28 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=28 pruub=9.575661659s) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown pruub 64.608634949s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:27:29 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:27:29 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:27:29 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:27:29 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2584835535' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 04:27:29 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2584835535' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 04:27:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1d( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1c( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.8( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.2( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.b( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.f( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.11( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.14( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.16( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.18( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1a( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.12( empty local-lis/les=13/14 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.8( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.2( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.0( empty local-lis/les=28/29 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.11( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.14( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.16( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.1a( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 29 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=13/13 les/c/f=14/14/0 sis=28) [1] r=0 lpr=28 pi=[13,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:30 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Nov 24 04:27:30 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3323422740' entity='client.admin' 
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 04:27:30 np0005533252 ceph-mon[80009]: from='osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 04:27:30 np0005533252 podman[80494]: 2025-11-24 09:27:30.939552303 +0000 UTC m=+0.052583730 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 04:27:31 np0005533252 podman[80494]: 2025-11-24 09:27:31.034805037 +0000 UTC m=+0.147836484 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Nov 24 04:27:31 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 24 04:27:31 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: Saving service ingress.rgw.default spec with placement count:2
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:27:31 np0005533252 ceph-mon[80009]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 24 04:27:32 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 24 04:27:32 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 24 04:27:32 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:32 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:32 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:32 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:32 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Nov 24 04:27:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:33 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 24 04:27:33 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e33 e33: 3 total, 2 up, 3 in
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: Saving service node-exporter spec with placement *
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: Saving service grafana spec with placement compute-0;count:1
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: Saving service prometheus spec with placement compute-0;count:1
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: Saving service alertmanager spec with placement compute-0;count:1
Nov 24 04:27:33 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 32 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32 pruub=9.349663734s) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active pruub 70.276336670s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32 pruub=9.349663734s) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown pruub 70.276336670s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.19( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1a( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.10( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1c( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1d( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1e( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.1f( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.5( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.7( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.a( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.b( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.c( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.d( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.14( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.11( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.17( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.12( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.16( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 33 pg[7.15( empty local-lis/les=19/20 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 24 04:27:34 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: Adjusting osd_memory_target on compute-2 to 127.9M
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/32884121' entity='client.admin' 
Nov 24 04:27:34 np0005533252 ceph-mon[80009]: OSD bench result of 8908.221181 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 04:27:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1c( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.17( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.12( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.15( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.7( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.0( empty local-lis/les=32/34 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.c( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.d( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.19( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1a( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 34 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=19/19 les/c/f=20/20/0 sis=32) [1] r=0 lpr=32 pi=[19,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 24 04:27:35 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/4251225502' entity='client.admin' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159] boot
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/390020590' entity='client.admin' 
Nov 24 04:27:36 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Nov 24 04:27:36 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 24 04:27:36 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 24 04:27:37 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.9 deep-scrub starts
Nov 24 04:27:37 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.9 deep-scrub ok
Nov 24 04:27:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:38 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3786425625' entity='client.admin' 
Nov 24 04:27:38 np0005533252 python3[81135]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:27:38 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 24 04:27:38 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 24 04:27:39 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Nov 24 04:27:39 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Nov 24 04:27:40 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/171270571' entity='client.admin' 
Nov 24 04:27:40 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:40 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:40 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:27:40 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 04:27:40 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:40 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 24 04:27:40 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 24 04:27:41 np0005533252 ceph-mon[80009]: Deploying daemon rgw.rgw.compute-2.qecnjt on compute-2
Nov 24 04:27:41 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/940045969' entity='client.admin' 
Nov 24 04:27:41 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 24 04:27:41 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3633642607' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.025905609s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.679153442s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.025840759s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.679153442s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.412052155s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065383911s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028734207s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682113647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411995888s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065383911s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411870956s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065322876s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028659821s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682113647s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028512001s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682136536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028491974s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682136536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411849976s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065322876s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028460503s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682136536s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411406517s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065132141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411390305s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065132141s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028267860s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682121277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028317451s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682189941s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028250694s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682121277s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.16( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028303146s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682189941s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411211967s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065055847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028315544s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682136536s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411434174s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.065414429s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411421776s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065414429s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.411125183s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.065055847s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028162956s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682212830s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.028146744s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682212830s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410729408s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064872742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410682678s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064849854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410472870s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064659119s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027964592s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682228088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410597801s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064849854s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410598755s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064872742s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027945518s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682228088s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410350800s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064659119s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027894020s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682312012s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410316467s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064758301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027781487s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682235718s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.410297394s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064758301s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027878761s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682312012s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027765274s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682235718s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409935951s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064651489s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409915924s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064651489s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409746170s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064506531s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027548790s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682327271s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409718513s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064506531s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027529716s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682327271s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027469635s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682388306s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027459145s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682388306s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027327538s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682342529s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027307510s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682342529s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027279854s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682395935s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027262688s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682395935s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409214020s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064353943s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409195900s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064353943s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409289360s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064422607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027223587s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682434082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409217834s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064422607s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027206421s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682434082s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409045219s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064323425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409029961s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064323425s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409012794s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064338684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408977509s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064338684s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027039528s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682449341s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027024269s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682441711s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027027130s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682449341s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.027005196s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682441711s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408799171s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064323425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408782005s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064323425s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026929855s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682502747s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026864052s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682502747s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.406463623s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.062194824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.406445503s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.062194824s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026896477s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682472229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408471107s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064254761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409080505s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064880371s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026688576s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682472229s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408452988s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064254761s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026684761s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682548523s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.409064293s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064880371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026658058s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682548523s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026719093s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 77.682632446s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=32/34 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36 pruub=9.026703835s) [0] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.682632446s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.406167030s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.062194824s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408223152s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064270020s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.406149864s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.062194824s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408313751s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 80.064369202s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408190727s) [2] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064270020s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=28/29 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=11.408292770s) [0] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.064369202s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.1f( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.18( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.1a( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.1b( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.1( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.19( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.e( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.1a( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.d( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.5( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.7( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.3( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.2( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.5( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.2( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.d( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.18( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.e( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.1b( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.a( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.7( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.8( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.c( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.1c( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.15( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.f( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[6.a( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[4.13( empty local-lis/les=0/0 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.9( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.16( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.15( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.11( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[5.10( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 24 04:27:42 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 24 04:27:42 np0005533252 podman[81240]: 2025-11-24 09:27:42.749632812 +0000 UTC m=+0.032456876 container create ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 04:27:42 np0005533252 systemd[1]: Started libpod-conmon-ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1.scope.
Nov 24 04:27:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:42 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:27:42 np0005533252 podman[81240]: 2025-11-24 09:27:42.806779927 +0000 UTC m=+0.089603991 container init ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:27:42 np0005533252 podman[81240]: 2025-11-24 09:27:42.812074969 +0000 UTC m=+0.094899033 container start ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:27:42 np0005533252 gracious_wozniak[81257]: 167 167
Nov 24 04:27:42 np0005533252 systemd[1]: libpod-ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1.scope: Deactivated successfully.
Nov 24 04:27:42 np0005533252 podman[81240]: 2025-11-24 09:27:42.817161737 +0000 UTC m=+0.099985831 container attach ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:27:42 np0005533252 podman[81240]: 2025-11-24 09:27:42.817605348 +0000 UTC m=+0.100429412 container died ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 04:27:42 np0005533252 podman[81240]: 2025-11-24 09:27:42.736362938 +0000 UTC m=+0.019187032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:27:42 np0005533252 systemd[1]: var-lib-containers-storage-overlay-696053096f122822b1c36abec3ce2ecae1b3b12abb9ec2b0d95fda9587d992bb-merged.mount: Deactivated successfully.
Nov 24 04:27:42 np0005533252 podman[81240]: 2025-11-24 09:27:42.864191829 +0000 UTC m=+0.147015893 container remove ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wozniak, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:27:42 np0005533252 systemd[1]: libpod-conmon-ceecfc4dc914bea9dfd03020a6c8314655013b2a2829e07386b0b9759669f7a1.scope: Deactivated successfully.
Nov 24 04:27:42 np0005533252 systemd[1]: Reloading.
Nov 24 04:27:42 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:27:42 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/3633642607' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.1f( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.10( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.11( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.13( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.15( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Nov 24 04:27:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3262419163' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.15( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.16( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.9( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.a( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.8( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.7( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.2( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.5( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.1( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.3( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.f( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.2( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.e( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.7( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.d( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.1c( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.1b( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.e( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.19( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.18( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[5.18( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [1] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.1b( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.c( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[4.1a( empty local-lis/les=36/37 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36) [1] r=0 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 37 pg[6.1a( empty local-lis/les=36/37 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36) [1] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:43 np0005533252 systemd[1]: Reloading.
Nov 24 04:27:43 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:27:43 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:27:43 np0005533252 systemd[1]: Starting Ceph rgw.rgw.compute-1.vproll for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 24 04:27:43 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 24 04:27:43 np0005533252 podman[81397]: 2025-11-24 09:27:43.635782511 +0000 UTC m=+0.034400165 container create 057f5f36976684027f826afccaa0c17f9c2dd2c811eb88cfaf92b11a885c1e56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-1-vproll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:27:43 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d585a4ed64a9fc51f3e12cd6e5d9a195791a1f2e75e3898e650cfab54229f34e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:43 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d585a4ed64a9fc51f3e12cd6e5d9a195791a1f2e75e3898e650cfab54229f34e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:43 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d585a4ed64a9fc51f3e12cd6e5d9a195791a1f2e75e3898e650cfab54229f34e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:43 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d585a4ed64a9fc51f3e12cd6e5d9a195791a1f2e75e3898e650cfab54229f34e/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.vproll supports timestamps until 2038 (0x7fffffff)
Nov 24 04:27:43 np0005533252 podman[81397]: 2025-11-24 09:27:43.693283435 +0000 UTC m=+0.091901099 container init 057f5f36976684027f826afccaa0c17f9c2dd2c811eb88cfaf92b11a885c1e56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-1-vproll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:27:43 np0005533252 podman[81397]: 2025-11-24 09:27:43.698197478 +0000 UTC m=+0.096815132 container start 057f5f36976684027f826afccaa0c17f9c2dd2c811eb88cfaf92b11a885c1e56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-1-vproll, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:27:43 np0005533252 bash[81397]: 057f5f36976684027f826afccaa0c17f9c2dd2c811eb88cfaf92b11a885c1e56
Nov 24 04:27:43 np0005533252 podman[81397]: 2025-11-24 09:27:43.62061774 +0000 UTC m=+0.019235424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:27:43 np0005533252 systemd[1]: Started Ceph rgw.rgw.compute-1.vproll for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:27:43 np0005533252 radosgw[81417]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 24 04:27:43 np0005533252 radosgw[81417]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 24 04:27:43 np0005533252 radosgw[81417]: framework: beast
Nov 24 04:27:43 np0005533252 radosgw[81417]: framework conf key: endpoint, val: 192.168.122.101:8082
Nov 24 04:27:43 np0005533252 radosgw[81417]: init_numa not setting numa affinity
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: Deploying daemon rgw.rgw.compute-1.vproll on compute-1
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.102:0/3262419163' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2662573742' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr respawn  1: '-n'
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr respawn  2: 'mgr.compute-1.qelqsg'
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr respawn  3: '-f'
Nov 24 04:27:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Nov 24 04:27:44 np0005533252 radosgw[81417]: rgw main: failed to create zonegroup with (17) File exists
Nov 24 04:27:44 np0005533252 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd[1]: session-26.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 23 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd[1]: session-22.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 26 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 22 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd[1]: session-20.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd[1]: session-28.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd[1]: session-31.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 31 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd[1]: session-24.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 20 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 28 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 24 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd[1]: session-27.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd[1]: session-29.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd[1]: session-32.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd[1]: session-32.scope: Consumed 59.665s CPU time.
Nov 24 04:27:44 np0005533252 systemd[1]: session-25.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 23.
Nov 24 04:27:44 np0005533252 systemd[1]: session-30.scope: Deactivated successfully.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 27 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 29 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 32 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 30 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Session 25 logged out. Waiting for processes to exit.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 26.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 22.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 20.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 28.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 31.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 24.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 27.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 29.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 32.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 25.
Nov 24 04:27:44 np0005533252 systemd-logind[823]: Removed session 30.
Nov 24 04:27:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setuser ceph since I am not root
Nov 24 04:27:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setgroup ceph since I am not root
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 04:27:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:44.421+0000 7fc2d4543140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 04:27:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:44.501+0000 7fc2d4543140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:27:44 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 04:27:44 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 24 04:27:44 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 24 04:27:45 np0005533252 ceph-mon[80009]: Deploying daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 04:27:45 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2662573742' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 24 04:27:45 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 04:27:45 np0005533252 ceph-mon[80009]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 04:27:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Nov 24 04:27:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 24 04:27:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 04:27:45 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 04:27:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:45.319+0000 7fc2d4543140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:27:45 np0005533252 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:27:45 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 04:27:45 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 24 04:27:45 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 24 04:27:45 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 04:27:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:45.972+0000 7fc2d4543140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:27:45 np0005533252 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:27:45 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 04:27:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 04:27:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 04:27:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:  from numpy import show_config as show_numpy_config
Nov 24 04:27:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:46.137+0000 7fc2d4543140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 04:27:46 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Nov 24 04:27:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:46.206+0000 7fc2d4543140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 04:27:46 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 04:27:46 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 04:27:46 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 04:27:46 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 04:27:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:46.344+0000 7fc2d4543140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 04:27:46 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 24 04:27:46 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 04:27:46 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 04:27:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.341+0000 7fc2d4543140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 04:27:47 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 24 04:27:47 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 41 pg[10.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:27:47 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 24 04:27:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.561+0000 7fc2d4543140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 04:27:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.639+0000 7fc2d4543140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 04:27:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.706+0000 7fc2d4543140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 04:27:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.788+0000 7fc2d4543140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 04:27:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:47.861+0000 7fc2d4543140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:27:47 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 04:27:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:48.208+0000 7fc2d4543140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:27:48 np0005533252 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:27:48 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 04:27:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Nov 24 04:27:48 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 42 pg[10.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:27:48 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 04:27:48 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 04:27:48 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 04:27:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:48.306+0000 7fc2d4543140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:27:48 np0005533252 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:27:48 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 04:27:48 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 24 04:27:48 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 24 04:27:48 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 04:27:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:48.744+0000 7fc2d4543140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:27:48 np0005533252 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:27:48 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 04:27:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 24 04:27:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 24 04:27:49 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 04:27:49 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 04:27:49 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 04:27:49 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 04:27:49 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 04:27:49 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 04:27:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.304+0000 7fc2d4543140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 04:27:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.378+0000 7fc2d4543140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 04:27:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.459+0000 7fc2d4543140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 04:27:49 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 24 04:27:49 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 04:27:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.606+0000 7fc2d4543140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 04:27:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.678+0000 7fc2d4543140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 04:27:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:49.837+0000 7fc2d4543140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:27:49 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 04:27:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:50.061+0000 7fc2d4543140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 04:27:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:50.326+0000 7fc2d4543140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 04:27:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:50.400+0000 7fc2d4543140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: mgr load Constructed class from module: dashboard
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Starting engine...
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x55d8b984f860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 04:27:50 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 24 04:27:50 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Engine started...
Nov 24 04:27:50 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 24 04:27:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 24 04:27:50 np0005533252 radosgw[81417]: v1 topic migration: starting v1 topic migration..
Nov 24 04:27:50 np0005533252 radosgw[81417]: LDAP not started since no server URIs were provided in the configuration.
Nov 24 04:27:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-1-vproll[81413]: 2025-11-24T09:27:50.893+0000 7faaa9bf1980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 24 04:27:50 np0005533252 radosgw[81417]: v1 topic migration: finished v1 topic migration
Nov 24 04:27:50 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 24 04:27:50 np0005533252 radosgw[81417]: framework: beast
Nov 24 04:27:50 np0005533252 radosgw[81417]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 24 04:27:50 np0005533252 radosgw[81417]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 24 04:27:50 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 24 04:27:50 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 24 04:27:50 np0005533252 radosgw[81417]: starting handler: beast
Nov 24 04:27:50 np0005533252 radosgw[81417]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 04:27:50 np0005533252 radosgw[81417]: mgrc service_daemon_register rgw.24191 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.vproll,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=0565e2b2-234e-414b-b909-932048ceb050,zone_name=default,zonegroup_id=5f03f326-32a0-4275-804c-1875d841eeca,zonegroup_name=default}
Nov 24 04:27:50 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 24 04:27:50 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 24 04:27:51 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 24 04:27:51 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 24 04:27:51 np0005533252 systemd-logind[823]: New session 33 of user ceph-admin.
Nov 24 04:27:51 np0005533252 systemd[1]: Started Session 33 of User ceph-admin.
Nov 24 04:27:51 np0005533252 ceph-mon[80009]: Active manager daemon compute-0.mauvni restarted
Nov 24 04:27:51 np0005533252 ceph-mon[80009]: Activating manager daemon compute-0.mauvni
Nov 24 04:27:51 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 04:27:51 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 04:27:51 np0005533252 ceph-mon[80009]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 04:27:51 np0005533252 ceph-mon[80009]: Manager daemon compute-0.mauvni is now available
Nov 24 04:27:51 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 04:27:51 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 04:27:51 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 24 04:27:51 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 24 04:27:51 np0005533252 podman[82208]: 2025-11-24 09:27:51.88508466 +0000 UTC m=+0.056424058 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 04:27:51 np0005533252 podman[82208]: 2025-11-24 09:27:51.983170183 +0000 UTC m=+0.154509571 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:27:52 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:52 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Nov 24 04:27:52 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Nov 24 04:27:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Bus STARTING
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Serving on http://192.168.122.100:8765
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: Cluster is now healthy
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:53 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:53 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Nov 24 04:27:53 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Serving on https://192.168.122.100:7150
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Bus STARTED
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:27:52] ENGINE Client ('192.168.122.100', 44150) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 04:27:54 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 04:27:54 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 24 04:27:54 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 24 04:27:55 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:55 np0005533252 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:27:55 np0005533252 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:27:55 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:27:55 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 24 04:27:55 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 24 04:27:56 np0005533252 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:27:56 np0005533252 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:27:56 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:27:56 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:56 np0005533252 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:27:56 np0005533252 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:27:56 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:56 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:56 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Nov 24 04:27:56 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Nov 24 04:27:57 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:57 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:57 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:27:57 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/1755702997' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 24 04:27:57 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:57 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  1: '-n'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  2: 'mgr.compute-1.qelqsg'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  3: '-f'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  4: '--setuser'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  5: 'ceph'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  6: '--setgroup'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  7: 'ceph'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  8: '--default-log-to-file=false'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  9: '--default-log-to-journald=true'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr respawn  exe_path /proc/self/exe
Nov 24 04:27:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setuser ceph since I am not root
Nov 24 04:27:57 np0005533252 systemd[1]: session-33.scope: Deactivated successfully.
Nov 24 04:27:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setgroup ceph since I am not root
Nov 24 04:27:57 np0005533252 systemd[1]: session-33.scope: Consumed 4.232s CPU time.
Nov 24 04:27:57 np0005533252 systemd-logind[823]: Session 33 logged out. Waiting for processes to exit.
Nov 24 04:27:57 np0005533252 systemd-logind[823]: Removed session 33.
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 04:27:57 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 24 04:27:57 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 04:27:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:57.697+0000 7fe515335140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:27:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:57.782+0000 7fe515335140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:27:57 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 04:27:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:27:58 np0005533252 ceph-mon[80009]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 04:27:58 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/1755702997' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 24 04:27:58 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 04:27:58 np0005533252 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:27:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:58.575+0000 7fe515335140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:27:58 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 04:27:58 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 24 04:27:58 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 04:27:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:59.210+0000 7fe515335140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:27:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 04:27:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 04:27:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:  from numpy import show_config as show_numpy_config
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:27:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:59.372+0000 7fe515335140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 04:27:59 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/4224251278' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 04:27:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:59.449+0000 7fe515335140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 04:27:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:27:59.591+0000 7fe515335140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:27:59 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 24 04:27:59 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 24 04:27:59 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 04:28:00 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/4224251278' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 04:28:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:00.602+0000 7fe515335140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:28:00 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 24 04:28:00 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:28:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:00.833+0000 7fe515335140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 04:28:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:00.908+0000 7fe515335140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:28:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:00.983+0000 7fe515335140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:28:00 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 04:28:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:01.066+0000 7fe515335140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 04:28:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:01.142+0000 7fe515335140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:28:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:01.497+0000 7fe515335140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 04:28:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:01.589+0000 7fe515335140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 04:28:01 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 24 04:28:01 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 24 04:28:01 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 04:28:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.014+0000 7fe515335140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 04:28:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.589+0000 7fe515335140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 04:28:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.660+0000 7fe515335140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 04:28:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.739+0000 7fe515335140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 04:28:02 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 24 04:28:02 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 24 04:28:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 04:28:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.888+0000 7fe515335140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 04:28:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:02.955+0000 7fe515335140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:28:02 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 04:28:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.108+0000 7fe515335140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 04:28:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.335+0000 7fe515335140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 04:28:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.612+0000 7fe515335140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 04:28:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.684+0000 7fe515335140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x56539b5e3860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 04:28:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setuser ceph since I am not root
Nov 24 04:28:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setgroup ceph since I am not root
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 04:28:03 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 24 04:28:03 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 04:28:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 24 04:28:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.903+0000 7f9a8ed44140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 04:28:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:03.987+0000 7f9a8ed44140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:28:03 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 04:28:04 np0005533252 ceph-mon[80009]: Active manager daemon compute-0.mauvni restarted
Nov 24 04:28:04 np0005533252 ceph-mon[80009]: Activating manager daemon compute-0.mauvni
Nov 24 04:28:04 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 04:28:04 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 24 04:28:04 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 24 04:28:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:04.848+0000 7f9a8ed44140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:28:04 np0005533252 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:28:04 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 04:28:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:05.489+0000 7f9a8ed44140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 04:28:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 04:28:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 04:28:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:  from numpy import show_config as show_numpy_config
Nov 24 04:28:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:05.658+0000 7f9a8ed44140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 04:28:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:05.733+0000 7f9a8ed44140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 04:28:05 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 24 04:28:05 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 04:28:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:05.872+0000 7f9a8ed44140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:28:05 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 04:28:06 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 04:28:06 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 04:28:06 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 04:28:06 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 04:28:06 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 24 04:28:06 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 24 04:28:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:06.865+0000 7f9a8ed44140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:28:06 np0005533252 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:28:06 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 04:28:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.076+0000 7f9a8ed44140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 04:28:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.155+0000 7f9a8ed44140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 04:28:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.218+0000 7f9a8ed44140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 04:28:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.293+0000 7f9a8ed44140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 04:28:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.363+0000 7f9a8ed44140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 04:28:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.716+0000 7f9a8ed44140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 04:28:07 np0005533252 systemd[1]: Stopping User Manager for UID 42477...
Nov 24 04:28:07 np0005533252 systemd[72465]: Activating special unit Exit the Session...
Nov 24 04:28:07 np0005533252 systemd[72465]: Stopped target Main User Target.
Nov 24 04:28:07 np0005533252 systemd[72465]: Stopped target Basic System.
Nov 24 04:28:07 np0005533252 systemd[72465]: Stopped target Paths.
Nov 24 04:28:07 np0005533252 systemd[72465]: Stopped target Sockets.
Nov 24 04:28:07 np0005533252 systemd[72465]: Stopped target Timers.
Nov 24 04:28:07 np0005533252 systemd[72465]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 24 04:28:07 np0005533252 systemd[72465]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 04:28:07 np0005533252 systemd[72465]: Closed D-Bus User Message Bus Socket.
Nov 24 04:28:07 np0005533252 systemd[72465]: Stopped Create User's Volatile Files and Directories.
Nov 24 04:28:07 np0005533252 systemd[72465]: Removed slice User Application Slice.
Nov 24 04:28:07 np0005533252 systemd[72465]: Reached target Shutdown.
Nov 24 04:28:07 np0005533252 systemd[72465]: Finished Exit the Session.
Nov 24 04:28:07 np0005533252 systemd[72465]: Reached target Exit the Session.
Nov 24 04:28:07 np0005533252 systemd[1]: user@42477.service: Deactivated successfully.
Nov 24 04:28:07 np0005533252 systemd[1]: Stopped User Manager for UID 42477.
Nov 24 04:28:07 np0005533252 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 24 04:28:07 np0005533252 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 24 04:28:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:07.819+0000 7f9a8ed44140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:28:07 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 04:28:07 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 24 04:28:07 np0005533252 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 24 04:28:07 np0005533252 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 24 04:28:07 np0005533252 systemd[1]: Removed slice User Slice of UID 42477.
Nov 24 04:28:07 np0005533252 systemd[1]: user-42477.slice: Consumed 1min 5.149s CPU time.
Nov 24 04:28:07 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 04:28:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:08.264+0000 7f9a8ed44140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 04:28:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:08.824+0000 7f9a8ed44140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 04:28:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:08.897+0000 7f9a8ed44140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 04:28:08 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Nov 24 04:28:08 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Nov 24 04:28:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:08.988+0000 7f9a8ed44140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:28:08 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 04:28:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.129+0000 7f9a8ed44140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 04:28:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.201+0000 7f9a8ed44140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 04:28:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.356+0000 7f9a8ed44140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 04:28:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.578+0000 7f9a8ed44140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 04:28:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.835+0000 7f9a8ed44140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 04:28:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:28:09.905+0000 7f9a8ed44140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: mgr load Constructed class from module: dashboard
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Starting engine...
Nov 24 04:28:09 np0005533252 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x55d9f1a15860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 04:28:09 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 24 04:28:09 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 24 04:28:10 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Engine started...
Nov 24 04:28:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 24 04:28:10 np0005533252 ceph-mon[80009]: Active manager daemon compute-0.mauvni restarted
Nov 24 04:28:10 np0005533252 ceph-mon[80009]: Activating manager daemon compute-0.mauvni
Nov 24 04:28:10 np0005533252 ceph-mon[80009]: Manager daemon compute-0.mauvni is now available
Nov 24 04:28:10 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 04:28:10 np0005533252 systemd-logind[823]: New session 34 of user ceph-admin.
Nov 24 04:28:10 np0005533252 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 04:28:10 np0005533252 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 04:28:10 np0005533252 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 04:28:10 np0005533252 systemd[1]: Starting User Manager for UID 42477...
Nov 24 04:28:10 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 24 04:28:10 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 24 04:28:10 np0005533252 systemd[83435]: Queued start job for default target Main User Target.
Nov 24 04:28:10 np0005533252 systemd[83435]: Created slice User Application Slice.
Nov 24 04:28:10 np0005533252 systemd[83435]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 04:28:10 np0005533252 systemd[83435]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 04:28:10 np0005533252 systemd[83435]: Reached target Paths.
Nov 24 04:28:10 np0005533252 systemd[83435]: Reached target Timers.
Nov 24 04:28:10 np0005533252 systemd[83435]: Starting D-Bus User Message Bus Socket...
Nov 24 04:28:10 np0005533252 systemd[83435]: Starting Create User's Volatile Files and Directories...
Nov 24 04:28:11 np0005533252 systemd[83435]: Finished Create User's Volatile Files and Directories.
Nov 24 04:28:11 np0005533252 systemd[83435]: Listening on D-Bus User Message Bus Socket.
Nov 24 04:28:11 np0005533252 systemd[83435]: Reached target Sockets.
Nov 24 04:28:11 np0005533252 systemd[83435]: Reached target Basic System.
Nov 24 04:28:11 np0005533252 systemd[83435]: Reached target Main User Target.
Nov 24 04:28:11 np0005533252 systemd[83435]: Startup finished in 118ms.
Nov 24 04:28:11 np0005533252 systemd[1]: Started User Manager for UID 42477.
Nov 24 04:28:11 np0005533252 systemd[1]: Started Session 34 of User ceph-admin.
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e2 new map
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e2 print_map#012e2#012btime 2025-11-24T09:28:11:441297+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:11.441245+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 04:28:11 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:11 np0005533252 podman[83567]: 2025-11-24 09:28:11.702530455 +0000 UTC m=+0.057333960 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:28:11 np0005533252 podman[83567]: 2025-11-24 09:28:11.8058782 +0000 UTC m=+0.160681685 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:28:11 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Nov 24 04:28:11 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Nov 24 04:28:12 np0005533252 ceph-mon[80009]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 04:28:12 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:12 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:12 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:12 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 24 04:28:12 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Bus STARTING
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Serving on https://192.168.122.100:7150
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Client ('192.168.122.100', 34270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Serving on http://192.168.122.100:8765
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:28:12] ENGINE Bus STARTED
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:28:13 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 24 04:28:13 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:28:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 24 04:28:14 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 49 pg[12.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:28:14 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 24 04:28:14 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 24 04:28:15 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 50 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:28:15 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:28:15 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 24 04:28:15 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:16 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 24 04:28:16 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 24 04:28:17 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:28:17 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:17 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:17 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:17 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 24 04:28:17 np0005533252 systemd[1]: Reloading.
Nov 24 04:28:17 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 24 04:28:18 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:28:18 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:28:18 np0005533252 systemd[1]: Reloading.
Nov 24 04:28:18 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:28:18 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:28:18 np0005533252 systemd[1]: Starting Ceph node-exporter.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:28:18 np0005533252 bash[84926]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 24 04:28:18 np0005533252 ceph-mon[80009]: Deploying daemon node-exporter.compute-1 on compute-1
Nov 24 04:28:18 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/1364618523' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 04:28:18 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/1364618523' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 04:28:18 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 24 04:28:18 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 24 04:28:19 np0005533252 bash[84926]: Getting image source signatures
Nov 24 04:28:19 np0005533252 bash[84926]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 24 04:28:19 np0005533252 bash[84926]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 24 04:28:19 np0005533252 bash[84926]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 24 04:28:19 np0005533252 bash[84926]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 24 04:28:19 np0005533252 bash[84926]: Writing manifest to image destination
Nov 24 04:28:19 np0005533252 podman[84926]: 2025-11-24 09:28:19.751639929 +0000 UTC m=+1.110649436 container create 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:28:19 np0005533252 podman[84926]: 2025-11-24 09:28:19.737969786 +0000 UTC m=+1.096979323 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 24 04:28:19 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9fc3d7f2d63c23da242ef46afd56f4d2787380c398999b23fcc357c97e197be/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:19 np0005533252 podman[84926]: 2025-11-24 09:28:19.794878225 +0000 UTC m=+1.153887752 container init 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:28:19 np0005533252 podman[84926]: 2025-11-24 09:28:19.798686651 +0000 UTC m=+1.157696158 container start 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:28:19 np0005533252 bash[84926]: 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.805Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.805Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.805Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.805Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.806Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.806Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 24 04:28:19 np0005533252 systemd[1]: Started Ceph node-exporter.compute-1 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=arp
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=bcache
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=bonding
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=cpu
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=dmi
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=edac
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=entropy
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=filefd
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=netclass
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=netdev
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=netstat
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=nfs
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=nvme
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=os
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=pressure
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=rapl
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=selinux
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=softnet
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=stat
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=textfile
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=time
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=uname
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=xfs
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=node_exporter.go:117 level=info collector=zfs
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 24 04:28:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1[85002]: ts=2025-11-24T09:28:19.807Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 24 04:28:19 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 24 04:28:19 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 24 04:28:20 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:20 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:20 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:20 np0005533252 ceph-mon[80009]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 24 04:28:20 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 24 04:28:20 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 24 04:28:21 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 24 04:28:21 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 24 04:28:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:22 np0005533252 ceph-mon[80009]: from='client.? 192.168.122.100:0/72635421' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 04:28:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:28:22 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 24 04:28:22 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 24 04:28:23 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Nov 24 04:28:23 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Nov 24 04:28:24 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Nov 24 04:28:24 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Nov 24 04:28:25 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:25 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 24 04:28:26 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 24 04:28:27 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 24 04:28:27 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 24 04:28:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 04:28:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 04:28:28 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 24 04:28:28 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 24 04:28:29 np0005533252 ceph-mon[80009]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 04:28:29 np0005533252 ceph-mon[80009]: Deploying daemon mds.cephfs.compute-2.bbilht on compute-2
Nov 24 04:28:29 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 24 04:28:29 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e3 new map
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e3 print_map#012e3#012btime 2025-11-24T09:28:30:031773+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:11.441245+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.bbilht{-1:24181} state up:standby seq 1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e4 new map
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: Deploying daemon mds.cephfs.compute-0.cibmfe on compute-0
Nov 24 04:28:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e4 print_map#012e4#012btime 2025-11-24T09:28:30:045188+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:30.045062+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.bbilht{0:24181} state up:creating seq 1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Nov 24 04:28:30 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 24 04:28:30 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 24 04:28:31 np0005533252 ceph-mon[80009]: daemon mds.cephfs.compute-2.bbilht assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 04:28:31 np0005533252 ceph-mon[80009]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 04:28:31 np0005533252 ceph-mon[80009]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 04:28:31 np0005533252 ceph-mon[80009]: Cluster is now healthy
Nov 24 04:28:31 np0005533252 ceph-mon[80009]: daemon mds.cephfs.compute-2.bbilht is now active in filesystem cephfs as rank 0
Nov 24 04:28:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e5 new map
Nov 24 04:28:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e5 print_map#012e5#012btime 2025-11-24T09:28:31:054777+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:31.054773+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Nov 24 04:28:31 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 24 04:28:31 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: Deploying daemon mds.cephfs.compute-1.vpamdk on compute-1
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e6 new map
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e6 print_map#012e6#012btime 2025-11-24T09:28:32:078769+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:31.054773+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e7 new map
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e7 print_map#012e7#012btime 2025-11-24T09:28:32:111568+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:31.054773+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 04:28:32 np0005533252 podman[85101]: 2025-11-24 09:28:32.126176392 +0000 UTC m=+0.038624370 container create edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 04:28:32 np0005533252 systemd[1]: Started libpod-conmon-edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6.scope.
Nov 24 04:28:32 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 24 04:28:32 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 24 04:28:32 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:28:32 np0005533252 podman[85101]: 2025-11-24 09:28:32.109084123 +0000 UTC m=+0.021532121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:28:32 np0005533252 podman[85101]: 2025-11-24 09:28:32.209598777 +0000 UTC m=+0.122046775 container init edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:28:32 np0005533252 podman[85101]: 2025-11-24 09:28:32.220055659 +0000 UTC m=+0.132503637 container start edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 04:28:32 np0005533252 podman[85101]: 2025-11-24 09:28:32.2232452 +0000 UTC m=+0.135693218 container attach edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:28:32 np0005533252 loving_mendeleev[85117]: 167 167
Nov 24 04:28:32 np0005533252 systemd[1]: libpod-edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6.scope: Deactivated successfully.
Nov 24 04:28:32 np0005533252 podman[85101]: 2025-11-24 09:28:32.22603067 +0000 UTC m=+0.138478648 container died edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 04:28:32 np0005533252 systemd[1]: var-lib-containers-storage-overlay-18da3e36b28181836d4abc74ac5e3847093744d75c5fd85e00eae76586960327-merged.mount: Deactivated successfully.
Nov 24 04:28:32 np0005533252 podman[85101]: 2025-11-24 09:28:32.267167032 +0000 UTC m=+0.179615010 container remove edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mendeleev, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 24 04:28:32 np0005533252 systemd[1]: libpod-conmon-edb64c9b493d914c07b76f263b5b86c8eb6bbeb9353060e7eefa042a0fe671c6.scope: Deactivated successfully.
Nov 24 04:28:32 np0005533252 systemd[1]: Reloading.
Nov 24 04:28:32 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:28:32 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:28:32 np0005533252 systemd[1]: Reloading.
Nov 24 04:28:32 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:28:32 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:28:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:32 np0005533252 systemd[1]: Starting Ceph mds.cephfs.compute-1.vpamdk for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:28:33 np0005533252 podman[85257]: 2025-11-24 09:28:33.014485116 +0000 UTC m=+0.038831886 container create 892cb33fd9be2104c947a5c0e68b0dfb62e05fd0cb4570a0c2b4964c9a90a80a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-1-vpamdk, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:28:33 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d9fb83e958cfca23c4285193430c8ea587492f6f39249914e79539341c402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:33 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d9fb83e958cfca23c4285193430c8ea587492f6f39249914e79539341c402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:33 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d9fb83e958cfca23c4285193430c8ea587492f6f39249914e79539341c402/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:33 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b26d9fb83e958cfca23c4285193430c8ea587492f6f39249914e79539341c402/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.vpamdk supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:33 np0005533252 podman[85257]: 2025-11-24 09:28:33.063415644 +0000 UTC m=+0.087762434 container init 892cb33fd9be2104c947a5c0e68b0dfb62e05fd0cb4570a0c2b4964c9a90a80a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-1-vpamdk, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 04:28:33 np0005533252 podman[85257]: 2025-11-24 09:28:33.069088397 +0000 UTC m=+0.093435167 container start 892cb33fd9be2104c947a5c0e68b0dfb62e05fd0cb4570a0c2b4964c9a90a80a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-1-vpamdk, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:28:33 np0005533252 bash[85257]: 892cb33fd9be2104c947a5c0e68b0dfb62e05fd0cb4570a0c2b4964c9a90a80a
Nov 24 04:28:33 np0005533252 podman[85257]: 2025-11-24 09:28:32.998181096 +0000 UTC m=+0.022527896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:28:33 np0005533252 systemd[1]: Started Ceph mds.cephfs.compute-1.vpamdk for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:28:33 np0005533252 ceph-mds[85277]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 04:28:33 np0005533252 ceph-mds[85277]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 24 04:28:33 np0005533252 ceph-mds[85277]: main not setting numa affinity
Nov 24 04:28:33 np0005533252 ceph-mds[85277]: pidfile_write: ignore empty --pid-file
Nov 24 04:28:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-1-vpamdk[85273]: starting mds.cephfs.compute-1.vpamdk at 
Nov 24 04:28:33 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Updating MDS map to version 7 from mon.2
Nov 24 04:28:33 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 24 04:28:33 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 24 04:28:34 np0005533252 podman[85384]: 2025-11-24 09:28:34.07326317 +0000 UTC m=+0.066130021 container create 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 04:28:34 np0005533252 systemd[1]: Started libpod-conmon-5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089.scope.
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: Rados config object exists: conf-nfs.cephfs
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr-rgw
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: Bind address in nfs.cephfs.0.0.compute-1.vvoanr's ganesha conf is defaulting to empty
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: Deploying daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e8 new map
Nov 24 04:28:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e8 print_map#012e8#012btime 2025-11-24T09:28:34:108975+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:34.078187+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 04:28:34 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Updating MDS map to version 8 from mon.2
Nov 24 04:28:34 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Monitors have assigned me to become a standby
Nov 24 04:28:34 np0005533252 podman[85384]: 2025-11-24 09:28:34.046913748 +0000 UTC m=+0.039780689 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:28:34 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:28:34 np0005533252 podman[85384]: 2025-11-24 09:28:34.165526586 +0000 UTC m=+0.158393467 container init 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 04:28:34 np0005533252 podman[85384]: 2025-11-24 09:28:34.175019775 +0000 UTC m=+0.167886626 container start 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:28:34 np0005533252 podman[85384]: 2025-11-24 09:28:34.179461926 +0000 UTC m=+0.172328787 container attach 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 04:28:34 np0005533252 xenodochial_allen[85400]: 167 167
Nov 24 04:28:34 np0005533252 systemd[1]: libpod-5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089.scope: Deactivated successfully.
Nov 24 04:28:34 np0005533252 podman[85384]: 2025-11-24 09:28:34.180681667 +0000 UTC m=+0.173548508 container died 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:28:34 np0005533252 systemd[1]: var-lib-containers-storage-overlay-98b261074ecf485173d59f0f0afafbfc1bde5fdc28831bffa7bdd8264a910678-merged.mount: Deactivated successfully.
Nov 24 04:28:34 np0005533252 podman[85384]: 2025-11-24 09:28:34.212124806 +0000 UTC m=+0.204991647 container remove 5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Nov 24 04:28:34 np0005533252 systemd[1]: libpod-conmon-5911d1847a693fd6e7f537532b57040f83080e3143b0732b840898e6f8483089.scope: Deactivated successfully.
Nov 24 04:28:34 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 24 04:28:34 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 24 04:28:34 np0005533252 systemd[1]: Reloading.
Nov 24 04:28:34 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:28:34 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:28:34 np0005533252 systemd[1]: Reloading.
Nov 24 04:28:34 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:28:34 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:28:34 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:28:34 np0005533252 podman[85540]: 2025-11-24 09:28:34.994475979 +0000 UTC m=+0.036478287 container create 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:28:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:35 np0005533252 podman[85540]: 2025-11-24 09:28:35.037742286 +0000 UTC m=+0.079744604 container init 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 04:28:35 np0005533252 podman[85540]: 2025-11-24 09:28:35.043861239 +0000 UTC m=+0.085863547 container start 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 04:28:35 np0005533252 bash[85540]: 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66
Nov 24 04:28:35 np0005533252 podman[85540]: 2025-11-24 09:28:34.979896442 +0000 UTC m=+0.021898770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:28:35 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:28:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:35 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:28:35 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:35 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:35 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 24 04:28:35 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:36 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 24 04:28:36 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e9 new map
Nov 24 04:28:36 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e9 print_map#012e9#012btime 2025-11-24T09:28:36:500458+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:34.078187+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 04:28:37 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 24 04:28:37 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 24 04:28:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 new map
Nov 24 04:28:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 print_map#012e10#012btime 2025-11-24T09:28:37:509059+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T09:28:11.441245+0000#012modified#0112025-11-24T09:28:34.078187+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 04:28:37 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Updating MDS map to version 10 from mon.2
Nov 24 04:28:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:28:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:38 : epoch 69242543 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:28:38 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 24 04:28:38 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 24 04:28:38 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 04:28:38 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 04:28:38 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:28:38 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 04:28:39 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 24 04:28:39 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 24 04:28:39 np0005533252 ceph-mon[80009]: Rados config object exists: conf-nfs.cephfs
Nov 24 04:28:39 np0005533252 ceph-mon[80009]: Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw
Nov 24 04:28:39 np0005533252 ceph-mon[80009]: Bind address in nfs.cephfs.1.0.compute-2.gkqxhl's ganesha conf is defaulting to empty
Nov 24 04:28:39 np0005533252 ceph-mon[80009]: Deploying daemon nfs.cephfs.1.0.compute-2.gkqxhl on compute-2
Nov 24 04:28:40 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 24 04:28:40 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 24 04:28:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:40 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:28:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:40 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:28:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:40 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:28:40 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:40 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:40 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:40 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 04:28:40 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 04:28:40 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 04:28:40 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 04:28:41 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 24 04:28:41 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 24 04:28:41 np0005533252 ceph-mon[80009]: Creating key for client.nfs.cephfs.2.0.compute-0.ssprex
Nov 24 04:28:41 np0005533252 ceph-mon[80009]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 24 04:28:42 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 24 04:28:42 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 24 04:28:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:43 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 24 04:28:43 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 24 04:28:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:43 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:28:43 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 04:28:43 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 04:28:44 np0005533252 ceph-mon[80009]: Rados config object exists: conf-nfs.cephfs
Nov 24 04:28:44 np0005533252 ceph-mon[80009]: Creating key for client.nfs.cephfs.2.0.compute-0.ssprex-rgw
Nov 24 04:28:44 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:28:44 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 04:28:44 np0005533252 ceph-mon[80009]: Bind address in nfs.cephfs.2.0.compute-0.ssprex's ganesha conf is defaulting to empty
Nov 24 04:28:44 np0005533252 ceph-mon[80009]: Deploying daemon nfs.cephfs.2.0.compute-0.ssprex on compute-0
Nov 24 04:28:45 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:45 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:45 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:45 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:45 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:45 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:28:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:28:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:28:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:28:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:28:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:46 : epoch 69242543 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:28:46 np0005533252 ceph-mon[80009]: Deploying daemon haproxy.nfs.cephfs.compute-1.rsdpvy on compute-1
Nov 24 04:28:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:48 np0005533252 podman[85698]: 2025-11-24 09:28:48.098313539 +0000 UTC m=+2.366985295 container create 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 04:28:48 np0005533252 systemd[1]: Started libpod-conmon-59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba.scope.
Nov 24 04:28:48 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:28:48 np0005533252 podman[85698]: 2025-11-24 09:28:48.085263343 +0000 UTC m=+2.353935119 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 24 04:28:48 np0005533252 podman[85698]: 2025-11-24 09:28:48.20737026 +0000 UTC m=+2.476042036 container init 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 04:28:48 np0005533252 podman[85698]: 2025-11-24 09:28:48.214608735 +0000 UTC m=+2.483280491 container start 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 04:28:48 np0005533252 podman[85698]: 2025-11-24 09:28:48.217655723 +0000 UTC m=+2.486327479 container attach 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 04:28:48 np0005533252 pensive_darwin[85816]: 0 0
Nov 24 04:28:48 np0005533252 systemd[1]: libpod-59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba.scope: Deactivated successfully.
Nov 24 04:28:48 np0005533252 podman[85698]: 2025-11-24 09:28:48.220013294 +0000 UTC m=+2.488685050 container died 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 04:28:48 np0005533252 systemd[1]: var-lib-containers-storage-overlay-a279e187ea2889cf7c395da58f98689d2a82206cd27a240d2862838f73f6d04e-merged.mount: Deactivated successfully.
Nov 24 04:28:48 np0005533252 podman[85698]: 2025-11-24 09:28:48.254417147 +0000 UTC m=+2.523088903 container remove 59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba (image=quay.io/ceph/haproxy:2.3, name=pensive_darwin)
Nov 24 04:28:48 np0005533252 systemd[1]: libpod-conmon-59beef65fd529a919f4605f2f0d2748e741e5cf4f4a4420fd7498c9ebb56efba.scope: Deactivated successfully.
Nov 24 04:28:48 np0005533252 systemd[1]: Reloading.
Nov 24 04:28:48 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:28:48 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:28:48 np0005533252 systemd[1]: Reloading.
Nov 24 04:28:48 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:28:48 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:28:48 np0005533252 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.rsdpvy for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:28:49 np0005533252 podman[85960]: 2025-11-24 09:28:49.045901083 +0000 UTC m=+0.041185379 container create 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:28:49 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf1367ea48cd2363374826f2dcdfb82f0ff9b3deac3686e1503c26387afee3/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 24 04:28:49 np0005533252 podman[85960]: 2025-11-24 09:28:49.103897762 +0000 UTC m=+0.099182068 container init 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:28:49 np0005533252 podman[85960]: 2025-11-24 09:28:49.108819268 +0000 UTC m=+0.104103544 container start 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:28:49 np0005533252 bash[85960]: 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914
Nov 24 04:28:49 np0005533252 podman[85960]: 2025-11-24 09:28:49.024850531 +0000 UTC m=+0.020134837 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 24 04:28:49 np0005533252 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.rsdpvy for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:28:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [NOTICE] 327/092849 (2) : New worker #1 (4) forked
Nov 24 04:28:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:49 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f943c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:49 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:49 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:49 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:50 np0005533252 ceph-mon[80009]: Deploying daemon haproxy.nfs.cephfs.compute-0.jzeayf on compute-0
Nov 24 04:28:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:51 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9430001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:52 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9418000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:53 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9414000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:53 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:53 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:53 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:53 np0005533252 ceph-mon[80009]: Deploying daemon haproxy.nfs.cephfs.compute-2.jwgmiu on compute-2
Nov 24 04:28:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:54 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9438001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:55 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9430001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:56 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94180016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:57 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94140016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:57 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94380025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:28:58 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:58 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:58 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:58 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:28:58 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 04:28:58 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 04:28:58 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 04:28:58 np0005533252 ceph-mon[80009]: Deploying daemon keepalived.nfs.cephfs.compute-2.gcugek on compute-2
Nov 24 04:28:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:58 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9430001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:28:59 np0005533252 kernel: ganesha.nfsd[85601]: segfault at 50 ip 00007f94ea48332e sp 00007f94b57f9210 error 4 in libntirpc.so.5.8[7f94ea468000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 24 04:28:59 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:28:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[85556]: 24/11/2025 09:28:59 : epoch 69242543 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9430001c00 fd 37 proxy ignored for local
Nov 24 04:28:59 np0005533252 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 24 04:28:59 np0005533252 systemd[1]: Started Process Core Dump (PID 85992/UID 0).
Nov 24 04:29:00 np0005533252 systemd-coredump[85993]: Process 85560 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f94ea48332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:29:00 np0005533252 systemd[1]: systemd-coredump@0-85992-0.service: Deactivated successfully.
Nov 24 04:29:00 np0005533252 systemd[1]: systemd-coredump@0-85992-0.service: Consumed 1.162s CPU time.
Nov 24 04:29:00 np0005533252 podman[85998]: 2025-11-24 09:29:00.432230178 +0000 UTC m=+0.026200124 container died 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:29:00 np0005533252 systemd[1]: var-lib-containers-storage-overlay-7e103a9c85ba1f18b9876ac5fd84e521bcc04dd1a1bcd486937b84d56161b6ec-merged.mount: Deactivated successfully.
Nov 24 04:29:00 np0005533252 podman[85998]: 2025-11-24 09:29:00.468952741 +0000 UTC m=+0.062922677 container remove 9e9cdce3dd47e25abe641cb56f8d264ce6aac97a311540e0d883cb63fbd43f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:29:00 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:29:00 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:29:00 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.363s CPU time.
Nov 24 04:29:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:03 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:04 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:04 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:04 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 04:29:04 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 04:29:04 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 04:29:04 np0005533252 ceph-mon[80009]: Deploying daemon keepalived.nfs.cephfs.compute-1.vrgskq on compute-1
Nov 24 04:29:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/092905 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:29:06 np0005533252 podman[86133]: 2025-11-24 09:29:06.808874377 +0000 UTC m=+2.575281473 container create 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2)
Nov 24 04:29:06 np0005533252 systemd[1]: Started libpod-conmon-5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f.scope.
Nov 24 04:29:06 np0005533252 podman[86133]: 2025-11-24 09:29:06.793158784 +0000 UTC m=+2.559565910 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 24 04:29:06 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:29:06 np0005533252 podman[86133]: 2025-11-24 09:29:06.876601306 +0000 UTC m=+2.643008422 container init 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.component=keepalived-container, release=1793, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=)
Nov 24 04:29:06 np0005533252 podman[86133]: 2025-11-24 09:29:06.88882016 +0000 UTC m=+2.655227246 container start 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, name=keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 24 04:29:06 np0005533252 podman[86133]: 2025-11-24 09:29:06.89195526 +0000 UTC m=+2.658362386 container attach 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 04:29:06 np0005533252 blissful_mclean[86228]: 0 0
Nov 24 04:29:06 np0005533252 systemd[1]: libpod-5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f.scope: Deactivated successfully.
Nov 24 04:29:06 np0005533252 conmon[86228]: conmon 5eb0ba04212e70e1a441 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f.scope/container/memory.events
Nov 24 04:29:06 np0005533252 podman[86133]: 2025-11-24 09:29:06.895593104 +0000 UTC m=+2.662000200 container died 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Nov 24 04:29:06 np0005533252 systemd[1]: var-lib-containers-storage-overlay-98179c9c59caec17d6eae17d5473404a690072617931b347a9b2c927e84b6e18-merged.mount: Deactivated successfully.
Nov 24 04:29:06 np0005533252 podman[86133]: 2025-11-24 09:29:06.927684308 +0000 UTC m=+2.694091404 container remove 5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f (image=quay.io/ceph/keepalived:2.2.4, name=blissful_mclean, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.component=keepalived-container, vcs-type=git, release=1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=)
Nov 24 04:29:06 np0005533252 systemd[1]: libpod-conmon-5eb0ba04212e70e1a44153e1ce1ce62d75f89d0038752e93195211af5f44406f.scope: Deactivated successfully.
Nov 24 04:29:06 np0005533252 systemd[1]: Reloading.
Nov 24 04:29:07 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:29:07 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:29:07 np0005533252 systemd[1]: Reloading.
Nov 24 04:29:07 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:29:07 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:29:07 np0005533252 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.vrgskq for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:29:07 np0005533252 podman[86371]: 2025-11-24 09:29:07.737192466 +0000 UTC m=+0.044474983 container create b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4)
Nov 24 04:29:07 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6ef1b45ae9962e33169771080c20b4cc6b2ef58b42546b8e18fc48898d55a/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:29:07 np0005533252 podman[86371]: 2025-11-24 09:29:07.785769424 +0000 UTC m=+0.093051961 container init b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, io.buildah.version=1.28.2, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Nov 24 04:29:07 np0005533252 podman[86371]: 2025-11-24 09:29:07.792501757 +0000 UTC m=+0.099784274 container start b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., name=keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793)
Nov 24 04:29:07 np0005533252 bash[86371]: b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866
Nov 24 04:29:07 np0005533252 podman[86371]: 2025-11-24 09:29:07.718187659 +0000 UTC m=+0.025470216 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 24 04:29:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:07 np0005533252 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.vrgskq for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Running on Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 (built for Linux 5.14.0)
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Starting VRRP child process, pid=4
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: Startup complete
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: (VI_0) Entering BACKUP STATE (init)
Nov 24 04:29:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:07 2025: VRRP_Script(check_backend) succeeded
Nov 24 04:29:08 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:08 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:08 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:08 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 04:29:08 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 04:29:08 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 04:29:08 np0005533252 ceph-mon[80009]: Deploying daemon keepalived.nfs.cephfs.compute-0.mglptr on compute-0
Nov 24 04:29:10 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 1.
Nov 24 04:29:10 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:29:10 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.363s CPU time.
Nov 24 04:29:10 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:29:10 np0005533252 podman[86442]: 2025-11-24 09:29:10.939858439 +0000 UTC m=+0.042883331 container create f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:29:10 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:29:10 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:29:10 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:29:10 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:29:10 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:29:10 np0005533252 podman[86442]: 2025-11-24 09:29:10.996142425 +0000 UTC m=+0.099167317 container init f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 04:29:11 np0005533252 podman[86442]: 2025-11-24 09:29:11.000512857 +0000 UTC m=+0.103537749 container start f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 04:29:11 np0005533252 bash[86442]: f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c
Nov 24 04:29:11 np0005533252 podman[86442]: 2025-11-24 09:29:10.917892506 +0000 UTC m=+0.020917418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:29:11 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:29:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:11 2025: (VI_0) Entering MASTER STATE
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:11 2025: (VI_0) Master received advert from 192.168.122.102 with same priority 90 but higher IP address than ours
Nov 24 04:29:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq[86386]: Mon Nov 24 09:29:11 2025: (VI_0) Entering BACKUP STATE
Nov 24 04:29:12 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:29:12 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:29:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 24 04:29:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 24 04:29:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:29:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:29:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:29:14 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:29:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 24 04:29:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 24 04:29:15 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:29:15 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 04:29:15 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:15 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:15 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 56 pg[10.0( v 42'48 (0'0,42'48] local-lis/les=41/42 n=8 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.097949028s) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 42'47 mlcod 42'47 active pruub 170.755828857s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:15 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 56 pg[10.0( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.097949028s) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 42'47 mlcod 0'0 unknown pruub 170.755828857s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: Regenerating cephadm self-signed grafana TLS certificates
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: Deploying daemon grafana.compute-0 on compute-0
Nov 24 04:29:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.7( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1b( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.10( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.12( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1f( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.11( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1e( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1d( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1c( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1a( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.19( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.18( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.6( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.5( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.4( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.8( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.b( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.d( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.3( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.9( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.a( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.c( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.f( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.e( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.2( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.13( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.14( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.15( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.16( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.17( v 42'48 lc 0'0 (0'0,42'48] local-lis/les=41/42 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1f( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1c( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1a( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1d( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.6( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.3( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.7( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.a( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.d( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.0( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 42'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.e( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.c( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.9( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.b( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.16( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.14( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.15( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.17( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 57 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 24 04:29:16 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 24 04:29:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:29:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:29:17 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 24 04:29:17 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 58 pg[12.0( v 57'56 (0'0,57'56] local-lis/les=49/50 n=8 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.413920403s) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 57'55 mlcod 57'55 active pruub 174.163131714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:17 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 58 pg[12.0( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.413920403s) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 57'55 mlcod 0'0 unknown pruub 174.163131714s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:17 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 24 04:29:17 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 24 04:29:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:18 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 04:29:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.11( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.10( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.13( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.15( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.4( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.7( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.6( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.9( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.8( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.12( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.a( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.c( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.f( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.e( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.b( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.d( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.5( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.2( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.3( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1f( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1e( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1c( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1a( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1b( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.19( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.18( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.16( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.17( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.14( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1d( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1( v 57'56 (0'0,57'56] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.11( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.10( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.15( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.13( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.4( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.6( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.7( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.9( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.8( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.a( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.f( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.d( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.b( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.5( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.2( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.3( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.0( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 57'55 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1f( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.12( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1a( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1b( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.18( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.19( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.16( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.17( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1d( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.14( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 59 pg[12.1( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 24 04:29:18 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 24 04:29:19 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.10 deep-scrub starts
Nov 24 04:29:19 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.10 deep-scrub ok
Nov 24 04:29:20 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 24 04:29:20 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 24 04:29:20 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:21 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 24 04:29:21 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 24 04:29:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.11( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.422721863s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.777389526s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.10( v 59'59 (0'0,59'59] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425310135s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'57 lcod 59'58 mlcod 59'58 active pruub 180.779983521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.11( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.422682762s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.777389526s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.10( v 59'59 (0'0,59'59] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425265312s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'57 lcod 59'58 mlcod 0'0 unknown NOTIFY pruub 180.779983521s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.15( v 59'51 (0'0,59'51] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341501236s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 59'50 active pruub 178.696243286s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.15( v 59'51 (0'0,59'51] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341470718s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 0'0 unknown NOTIFY pruub 178.696243286s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.13( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425135612s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780044556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.13( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425113678s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780044556s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.14( v 59'51 (0'0,59'51] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341238976s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 59'50 active pruub 178.696228027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.12( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425786018s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780792236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.14( v 59'51 (0'0,59'51] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341207504s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 0'0 unknown NOTIFY pruub 178.696228027s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.12( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.425769806s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780792236s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341078758s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.696151733s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.4( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424939156s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780136108s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.341030121s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.696151733s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.4( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424919128s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780136108s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340937614s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.696228027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340916634s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.696228027s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.7( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424738884s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780151367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.7( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424723625s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780151367s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.6( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424711227s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780151367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.6( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424695969s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780151367s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340555191s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.696151733s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.f( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340542793s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.696151733s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.9( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424826622s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780471802s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.9( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424814224s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780471802s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.8( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424716949s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780517578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.8( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424701691s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780517578s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.a( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424571037s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780532837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424633980s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780593872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.a( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424552917s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780532837s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340095520s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.696090698s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424615860s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780593872s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.340083122s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.696090698s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.b( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424512863s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780670166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339606285s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695755005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424450874s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780624390s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424436569s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780624390s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339557648s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695755005s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.b( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424476624s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780670166s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.3( v 59'51 (0'0,59'51] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339503288s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 59'50 active pruub 178.695785522s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.3( v 59'51 (0'0,59'51] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339451790s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 59'50 mlcod 0'0 unknown NOTIFY pruub 178.695785522s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.2( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424294472s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780670166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339303017s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695693970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.3( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424245834s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.780700684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.4( v 42'48 (0'0,42'48] local-lis/les=56/57 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339271545s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695693970s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.3( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424232483s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780700684s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338710785s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695205688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424853325s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781356812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.2( v 57'56 (0'0,57'56] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424114227s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.780670166s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1e( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424824715s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781356812s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338671684s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695205688s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339028358s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695739746s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338397026s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695144653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.339013100s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695739746s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338379860s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695144653s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424505234s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781356812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1c( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424476624s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781356812s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1a( v 59'59 (0'0,59'59] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.423853874s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'57 lcod 59'58 mlcod 59'58 active pruub 180.780807495s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.338031769s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695007324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.19( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424389839s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781433105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1e( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337989807s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695007324s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.19( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424373627s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781433105s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1a( v 59'59 (0'0,59'59] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.423757553s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'57 lcod 59'58 mlcod 0'0 unknown NOTIFY pruub 180.780807495s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337868690s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695007324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.10( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337856293s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695007324s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337847710s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695022583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.11( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337830544s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695022583s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.17( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424319267s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781524658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337912560s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.695159912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.12( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.337898254s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.695159912s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.17( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424275398s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781524658s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.18( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424061775s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781417847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1d( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424158096s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active pruub 180.781539917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.332690239s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active pruub 178.690093994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.1d( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424140930s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781539917s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[12.18( v 57'56 (0'0,57'56] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=11.424025536s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.781417847s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=56/57 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.332671165s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.690093994s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.17( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.8( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 04:29:22 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.19( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 60 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.17( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.14( v 38'12 lc 38'2 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.8( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.18( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.1b( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.4( v 38'12 (0'0,38'12] local-lis/les=60/61 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.12( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.10( v 38'12 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=60/61 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60) [1] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 61 pg[8.19( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=60/61 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60) [1] r=0 lpr=60 pi=[54,60)/1 crt=38'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:23 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:24 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 24 04:29:24 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 24 04:29:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 24 04:29:24 np0005533252 ceph-mon[80009]: Deploying daemon haproxy.rgw.default.compute-0.fxvlbj on compute-0
Nov 24 04:29:24 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 04:29:25 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Nov 24 04:29:25 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Nov 24 04:29:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000050s ======
Nov 24 04:29:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Nov 24 04:29:25 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 04:29:25 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:25 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:25 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:25 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:26 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.0 deep-scrub starts
Nov 24 04:29:26 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.0 deep-scrub ok
Nov 24 04:29:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: Deploying daemon haproxy.rgw.default.compute-2.tariiq on compute-2
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 04:29:27 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Nov 24 04:29:27 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.798350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567798489, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6941, "num_deletes": 258, "total_data_size": 18480495, "memory_usage": 19237136, "flush_reason": "Manual Compaction"}
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567854157, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11751717, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 6946, "table_properties": {"data_size": 11725316, "index_size": 16686, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 83905, "raw_average_key_size": 24, "raw_value_size": 11659115, "raw_average_value_size": 3392, "num_data_blocks": 733, "num_entries": 3437, "num_filter_entries": 3437, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 1763976422, "file_creation_time": 1763976567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 55861 microseconds, and 22776 cpu microseconds.
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.854217) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11751717 bytes OK
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.854241) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.861434) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.861453) EVENT_LOG_v1 {"time_micros": 1763976567861449, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.861471) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 18443350, prev total WAL file size 18443350, number of live WAL files 2.
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.865008) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567865099, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11753365, "oldest_snapshot_seqno": -1}
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3183 keys, 11748211 bytes, temperature: kUnknown
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567917121, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11748211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11722439, "index_size": 16702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 80386, "raw_average_key_size": 25, "raw_value_size": 11659395, "raw_average_value_size": 3663, "num_data_blocks": 732, "num_entries": 3183, "num_filter_entries": 3183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.917397) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11748211 bytes
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.922558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.6 rd, 225.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.2, 0.0 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3442, records dropped: 259 output_compression: NoCompression
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.922612) EVENT_LOG_v1 {"time_micros": 1763976567922591, "job": 4, "event": "compaction_finished", "compaction_time_micros": 52106, "compaction_time_cpu_micros": 21904, "output_level": 6, "num_output_files": 1, "total_output_size": 11748211, "num_input_records": 3442, "num_output_records": 3183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567925188, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567925246, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 24 04:29:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:27.864909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:29:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 04:29:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:28 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000050s ======
Nov 24 04:29:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:28.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Nov 24 04:29:28 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 24 04:29:28 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 24 04:29:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 24 04:29:29 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 04:29:29 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 04:29:29 np0005533252 ceph-mon[80009]: Deploying daemon keepalived.rgw.default.compute-2.atxclo on compute-2
Nov 24 04:29:29 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 04:29:29 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 04:29:29 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.c deep-scrub starts
Nov 24 04:29:29 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.c deep-scrub ok
Nov 24 04:29:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 24 04:29:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:30 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:30 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:30 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:30 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 04:29:30 np0005533252 ceph-mon[80009]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 04:29:30 np0005533252 ceph-mon[80009]: Deploying daemon keepalived.rgw.default.compute-0.zrpppr on compute-0
Nov 24 04:29:30 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 24 04:29:30 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 24 04:29:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 24 04:29:31 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 04:29:31 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:31 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 04:29:31 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:31 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.f scrub starts
Nov 24 04:29:31 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.f scrub ok
Nov 24 04:29:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 24 04:29:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:31.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:32.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:29:32 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:32 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:32 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:32 np0005533252 ceph-mon[80009]: Deploying daemon prometheus.compute-0 on compute-0
Nov 24 04:29:32 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 24 04:29:32 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 24 04:29:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 24 04:29:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc000fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:33 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 24 04:29:33 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 24 04:29:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:33.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 24 04:29:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:34.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:34 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.d scrub starts
Nov 24 04:29:34 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.d scrub ok
Nov 24 04:29:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 24 04:29:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:34 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/092935 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:29:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:35 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Nov 24 04:29:35 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Nov 24 04:29:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:36.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:36 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:36 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Nov 24 04:29:36 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Nov 24 04:29:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:36 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:37 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 24 04:29:37 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 24 04:29:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:37.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:38.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:38 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:38 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:38 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:38 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 24 04:29:38 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Nov 24 04:29:38 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Nov 24 04:29:38 np0005533252 systemd[1]: session-34.scope: Deactivated successfully.
Nov 24 04:29:38 np0005533252 systemd[1]: session-34.scope: Consumed 17.642s CPU time.
Nov 24 04:29:38 np0005533252 systemd-logind[823]: Session 34 logged out. Waiting for processes to exit.
Nov 24 04:29:38 np0005533252 systemd-logind[823]: Removed session 34.
Nov 24 04:29:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setuser ceph since I am not root
Nov 24 04:29:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: ignoring --setgroup ceph since I am not root
Nov 24 04:29:38 np0005533252 ceph-mgr[80316]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 04:29:38 np0005533252 ceph-mgr[80316]: pidfile_write: ignore empty --pid-file
Nov 24 04:29:38 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'alerts'
Nov 24 04:29:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:38.875+0000 7fd8715da140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:29:38 np0005533252 ceph-mgr[80316]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 04:29:38 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'balancer'
Nov 24 04:29:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:38 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:38.961+0000 7fd8715da140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:29:38 np0005533252 ceph-mgr[80316]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 04:29:38 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'cephadm'
Nov 24 04:29:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae80016a0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 24 04:29:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:39 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72) [1] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:39 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72) [1] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:39 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72) [1] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:39 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72) [1] r=0 lpr=72 pi=[54,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.559461) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579559488, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 685, "num_deletes": 251, "total_data_size": 1402688, "memory_usage": 1461600, "flush_reason": "Manual Compaction"}
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579576752, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 909671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6951, "largest_seqno": 7631, "table_properties": {"data_size": 906015, "index_size": 1372, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9601, "raw_average_key_size": 20, "raw_value_size": 898100, "raw_average_value_size": 1906, "num_data_blocks": 61, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976568, "oldest_key_time": 1763976568, "file_creation_time": 1763976579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 17328 microseconds, and 3201 cpu microseconds.
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.576786) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 909671 bytes OK
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.576805) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.579779) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.579794) EVENT_LOG_v1 {"time_micros": 1763976579579790, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.579809) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1398662, prev total WAL file size 1398662, number of live WAL files 2.
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.580344) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(888KB)], [15(11MB)]
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579580372, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12657882, "oldest_snapshot_seqno": -1}
Nov 24 04:29:39 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 24 04:29:39 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 24 04:29:39 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'crash'
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3129 keys, 11433137 bytes, temperature: kUnknown
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579712452, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11433137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11408170, "index_size": 16026, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 80873, "raw_average_key_size": 25, "raw_value_size": 11346347, "raw_average_value_size": 3626, "num_data_blocks": 696, "num_entries": 3129, "num_filter_entries": 3129, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.712692) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11433137 bytes
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.745477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.8 rd, 86.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(26.5) write-amplify(12.6) OK, records in: 3654, records dropped: 525 output_compression: NoCompression
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.745516) EVENT_LOG_v1 {"time_micros": 1763976579745501, "job": 6, "event": "compaction_finished", "compaction_time_micros": 132164, "compaction_time_cpu_micros": 21930, "output_level": 6, "num_output_files": 1, "total_output_size": 11433137, "num_input_records": 3654, "num_output_records": 3129, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579745794, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579747530, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.580273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:29:39 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:29:39.747606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:29:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:39.782+0000 7fd8715da140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:29:39 np0005533252 ceph-mgr[80316]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 04:29:39 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'dashboard'
Nov 24 04:29:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:39.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'devicehealth'
Nov 24 04:29:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:40.384+0000 7fd8715da140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 04:29:40 np0005533252 ceph-mon[80009]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 04:29:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 04:29:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 04:29:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]:  from numpy import show_config as show_numpy_config
Nov 24 04:29:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:40.552+0000 7fd8715da140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'influx'
Nov 24 04:29:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=-1 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Nov 24 04:29:40 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Nov 24 04:29:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:40.622+0000 7fd8715da140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'insights'
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'iostat'
Nov 24 04:29:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:40.763+0000 7fd8715da140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 04:29:40 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'k8sevents'
Nov 24 04:29:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:40 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14000df0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:41 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'localpool'
Nov 24 04:29:41 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 04:29:41 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'mirroring'
Nov 24 04:29:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae80016a0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:41 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'nfs'
Nov 24 04:29:41 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 24 04:29:41 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 24 04:29:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:41.788+0000 7fd8715da140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:29:41 np0005533252 ceph-mgr[80316]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 04:29:41 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'orchestrator'
Nov 24 04:29:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:41.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:41 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 24 04:29:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.010+0000 7fd8715da140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 04:29:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.088+0000 7fd8715da140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'osd_support'
Nov 24 04:29:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:42.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.159+0000 7fd8715da140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 04:29:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.241+0000 7fd8715da140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'progress'
Nov 24 04:29:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.312+0000 7fd8715da140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'prometheus'
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Nov 24 04:29:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.666+0000 7fd8715da140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rbd_support'
Nov 24 04:29:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:42.772+0000 7fd8715da140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'restful'
Nov 24 04:29:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:29:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 75 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:42 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:42 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rgw'
Nov 24 04:29:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:43.207+0000 7fd8715da140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'rook'
Nov 24 04:29:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb140091b0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:43 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Nov 24 04:29:43 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Nov 24 04:29:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:43.771+0000 7fd8715da140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'selftest'
Nov 24 04:29:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 24 04:29:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 76 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 76 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 76 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 76 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75) [1] r=0 lpr=75 pi=[54,75)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:29:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:43.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:43.844+0000 7fd8715da140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'snap_schedule'
Nov 24 04:29:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:43.924+0000 7fd8715da140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'stats'
Nov 24 04:29:43 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'status'
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.074+0000 7fd8715da140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telegraf'
Nov 24 04:29:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.149+0000 7fd8715da140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'telemetry'
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.311+0000 7fd8715da140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.526+0000 7fd8715da140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'volumes'
Nov 24 04:29:44 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 24 04:29:44 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.793+0000 7fd8715da140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Loading python module 'zabbix'
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 2025-11-24T09:29:44.864+0000 7fd8715da140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr load Constructed class from module: dashboard
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: mgr load Constructed class from module: prometheus
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [prometheus INFO root] Starting engine...
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: [24/Nov/2025:09:29:44] ENGINE Bus STARTING
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Starting engine...
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:44] ENGINE Bus STARTING
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: CherryPy Checker:
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: The Application mounted at '' has an empty config.
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: 
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: ms_deliver_dispatch: unhandled message 0x557960f73860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:44 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [dashboard INFO root] Engine started...
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: [24/Nov/2025:09:29:44] ENGINE Serving on http://:::9283
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:44] ENGINE Serving on http://:::9283
Nov 24 04:29:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-1-qelqsg[80312]: [24/Nov/2025:09:29:44] ENGINE Bus STARTED
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:44] ENGINE Bus STARTED
Nov 24 04:29:44 np0005533252 ceph-mgr[80316]: [prometheus INFO root] Engine started.
Nov 24 04:29:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 all = 0
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 all = 0
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 all = 0
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).mds e10 all = 1
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"} v 0)
Nov 24 04:29:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 04:29:45 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Nov 24 04:29:45 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Nov 24 04:29:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:45 np0005533252 systemd-logind[823]: New session 36 of user ceph-admin.
Nov 24 04:29:45 np0005533252 systemd[1]: Started Session 36 of User ceph-admin.
Nov 24 04:29:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:45.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:46 np0005533252 ceph-mon[80009]: Active manager daemon compute-0.mauvni restarted
Nov 24 04:29:46 np0005533252 ceph-mon[80009]: Activating manager daemon compute-0.mauvni
Nov 24 04:29:46 np0005533252 ceph-mon[80009]: Manager daemon compute-0.mauvni is now available
Nov 24 04:29:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 04:29:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 04:29:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 04:29:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 04:29:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:46.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:46 np0005533252 podman[86710]: 2025-11-24 09:29:46.499255353 +0000 UTC m=+0.056927709 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 04:29:46 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Nov 24 04:29:46 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Nov 24 04:29:46 np0005533252 podman[86710]: 2025-11-24 09:29:46.594702538 +0000 UTC m=+0.152374894 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:29:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:46 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:47 np0005533252 podman[86847]: 2025-11-24 09:29:47.058463627 +0000 UTC m=+0.050147566 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:29:47 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Bus STARTING
Nov 24 04:29:47 np0005533252 podman[86872]: 2025-11-24 09:29:47.120560799 +0000 UTC m=+0.049280873 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:29:47 np0005533252 podman[86847]: 2025-11-24 09:29:47.124949202 +0000 UTC m=+0.116633111 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:29:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Nov 24 04:29:47 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 04:29:47 np0005533252 podman[86921]: 2025-11-24 09:29:47.355190615 +0000 UTC m=+0.062258549 container exec f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 04:29:47 np0005533252 podman[86921]: 2025-11-24 09:29:47.368875608 +0000 UTC m=+0.075943522 container exec_died f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 04:29:47 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 24 04:29:47 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 24 04:29:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:47 np0005533252 podman[86986]: 2025-11-24 09:29:47.601079881 +0000 UTC m=+0.056181892 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:29:47 np0005533252 podman[86986]: 2025-11-24 09:29:47.613835341 +0000 UTC m=+0.068937312 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:29:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:47 np0005533252 podman[87049]: 2025-11-24 09:29:47.820194666 +0000 UTC m=+0.046893572 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container)
Nov 24 04:29:47 np0005533252 podman[87049]: 2025-11-24 09:29:47.83469601 +0000 UTC m=+0.061394916 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.tags=Ceph keepalived)
Nov 24 04:29:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:29:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Serving on http://192.168.122.100:8765
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Serving on https://192.168.122.100:7150
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Bus STARTED
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: [24/Nov/2025:09:29:46] ENGINE Client ('192.168.122.100', 37806) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:48.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 24 04:29:48 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 24 04:29:48 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 04:29:48 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:48 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:29:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Nov 24 04:29:49 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 04:29:49 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 24 04:29:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:49 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 24 04:29:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:49.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:29:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:50.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:29:50 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:29:50 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 24 04:29:50 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 24 04:29:50 np0005533252 systemd-logind[823]: New session 37 of user zuul.
Nov 24 04:29:50 np0005533252 systemd[1]: Started Session 37 of User zuul.
Nov 24 04:29:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:50 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 04:29:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 24 04:29:51 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 24 04:29:51 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 24 04:29:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:51.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:51 np0005533252 python3.9[87782]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:29:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:52.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:29:52 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 24 04:29:52 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:29:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:52 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:29:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:53 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 24 04:29:53 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 24 04:29:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:29:53 np0005533252 python3.9[88493]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:29:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 24 04:29:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:54.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:54 np0005533252 ceph-mon[80009]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 04:29:54 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:54 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:54 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:54 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:54 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:29:54 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 24 04:29:54 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 24 04:29:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:54 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:55 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 24 04:29:55 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 24 04:29:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:55.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:56.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:56 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 24 04:29:56 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 24 04:29:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:56 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Nov 24 04:29:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 04:29:57 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 04:29:57 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 04:29:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 24 04:29:57 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.1b deep-scrub starts
Nov 24 04:29:57 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.1b deep-scrub ok
Nov 24 04:29:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 24 04:29:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:29:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:57.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:29:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:29:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:58.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 04:29:58 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 24 04:29:58 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:29:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:29:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:58 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 04:29:59 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 24 04:29:59 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 24 04:29:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:29:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:29:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 24 04:29:59 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 87 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:59 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 87 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=87) [1] r=0 lpr=87 pi=[54,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:29:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:29:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:29:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:59.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:00.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:00 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 24 04:30:00 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: overall HEALTH_OK
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 24 04:30:00 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 88 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:00 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 88 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:00 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 88 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:00 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 88 pg[9.a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[54,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:00 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:01 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 24 04:30:01 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: Reconfiguring mgr.compute-0.mauvni (monmap changed)...
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: Reconfiguring daemon mgr.compute-0.mauvni on compute-0
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 04:30:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:01.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:02.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:02 np0005533252 systemd[1]: session-37.scope: Deactivated successfully.
Nov 24 04:30:02 np0005533252 systemd[1]: session-37.scope: Consumed 7.897s CPU time.
Nov 24 04:30:02 np0005533252 systemd-logind[823]: Session 37 logged out. Waiting for processes to exit.
Nov 24 04:30:02 np0005533252 systemd-logind[823]: Removed session 37.
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: Reconfiguring osd.0 (monmap changed)...
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: Reconfiguring daemon osd.0 on compute-0
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 24 04:30:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:02 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 90 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:02 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 90 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:02 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 90 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:02 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 90 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=6 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:02 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:03 np0005533252 ceph-mon[80009]: Reconfiguring rgw.rgw.compute-0.zlrxyg (unknown last config time)...
Nov 24 04:30:03 np0005533252 ceph-mon[80009]: Reconfiguring daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 04:30:03 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:03 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 24 04:30:03 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 91 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=6 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:03 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 91 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=5 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:04.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:04 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 24 04:30:04 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 24 04:30:04 np0005533252 ceph-mon[80009]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 24 04:30:04 np0005533252 ceph-mon[80009]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 24 04:30:04 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:04 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:04 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:05 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 24 04:30:05 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 24 04:30:05 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:05 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:05 np0005533252 ceph-mon[80009]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 24 04:30:05 np0005533252 ceph-mon[80009]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 24 04:30:05 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:05 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:05.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:06 np0005533252 ceph-mon[80009]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 24 04:30:06 np0005533252 ceph-mon[80009]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 24 04:30:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:06 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Nov 24 04:30:07 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 04:30:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:07.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 04:30:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:08 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 24 04:30:08 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:08 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 04:30:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:09 np0005533252 podman[88673]: 2025-11-24 09:30:09.239632268 +0000 UTC m=+0.038733711 container create 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 04:30:09 np0005533252 systemd[1]: Started libpod-conmon-810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009.scope.
Nov 24 04:30:09 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:30:09 np0005533252 podman[88673]: 2025-11-24 09:30:09.220881874 +0000 UTC m=+0.019983327 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:30:09 np0005533252 podman[88673]: 2025-11-24 09:30:09.326188202 +0000 UTC m=+0.125289675 container init 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:30:09 np0005533252 podman[88673]: 2025-11-24 09:30:09.332227217 +0000 UTC m=+0.131328660 container start 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:30:09 np0005533252 podman[88673]: 2025-11-24 09:30:09.335369058 +0000 UTC m=+0.134470501 container attach 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 04:30:09 np0005533252 clever_thompson[88690]: 167 167
Nov 24 04:30:09 np0005533252 systemd[1]: libpod-810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009.scope: Deactivated successfully.
Nov 24 04:30:09 np0005533252 podman[88673]: 2025-11-24 09:30:09.349242116 +0000 UTC m=+0.148343559 container died 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 04:30:09 np0005533252 systemd[1]: var-lib-containers-storage-overlay-071a74f7b29a3951ec7097108bf579a9545642a78f247652166c2d5c14206ba1-merged.mount: Deactivated successfully.
Nov 24 04:30:09 np0005533252 podman[88673]: 2025-11-24 09:30:09.410098857 +0000 UTC m=+0.209200310 container remove 810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 04:30:09 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Nov 24 04:30:09 np0005533252 systemd[1]: libpod-conmon-810a29e468c068dc53ed0dcb6d8db177b483bc9c1d39e5bd4dbc0de043f9b009.scope: Deactivated successfully.
Nov 24 04:30:09 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:09 np0005533252 podman[88774]: 2025-11-24 09:30:09.933521777 +0000 UTC m=+0.055029922 container create a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 04:30:09 np0005533252 systemd[1]: Started libpod-conmon-a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638.scope.
Nov 24 04:30:09 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:30:10 np0005533252 podman[88774]: 2025-11-24 09:30:09.914892185 +0000 UTC m=+0.036400320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:30:10 np0005533252 podman[88774]: 2025-11-24 09:30:10.026112986 +0000 UTC m=+0.147621191 container init a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 04:30:10 np0005533252 podman[88774]: 2025-11-24 09:30:10.035540809 +0000 UTC m=+0.157048964 container start a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 04:30:10 np0005533252 vigorous_lichterman[88790]: 167 167
Nov 24 04:30:10 np0005533252 systemd[1]: libpod-a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638.scope: Deactivated successfully.
Nov 24 04:30:10 np0005533252 podman[88774]: 2025-11-24 09:30:10.041581785 +0000 UTC m=+0.163089980 container attach a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 04:30:10 np0005533252 podman[88774]: 2025-11-24 09:30:10.042136489 +0000 UTC m=+0.163644614 container died a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:30:10 np0005533252 systemd[1]: var-lib-containers-storage-overlay-650615f4fc28d1f143e251d8662732aa776bd1b2c36a0113362c109be95cbaf1-merged.mount: Deactivated successfully.
Nov 24 04:30:10 np0005533252 podman[88774]: 2025-11-24 09:30:10.131773383 +0000 UTC m=+0.253281498 container remove a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_lichterman, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:30:10 np0005533252 systemd[1]: libpod-conmon-a0f40af6dbc69dae8b3f149463d97a897a5bfea0a004958cad67ddf65a534638.scope: Deactivated successfully.
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: Reconfiguring osd.1 (monmap changed)...
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: Reconfiguring daemon osd.1 on compute-1
Nov 24 04:30:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 24 04:30:10 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 24 04:30:10 np0005533252 podman[88883]: 2025-11-24 09:30:10.759760691 +0000 UTC m=+0.036172005 container create 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 04:30:10 np0005533252 systemd[1]: Started libpod-conmon-65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0.scope.
Nov 24 04:30:10 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:30:10 np0005533252 podman[88883]: 2025-11-24 09:30:10.82910636 +0000 UTC m=+0.105517674 container init 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:30:10 np0005533252 podman[88883]: 2025-11-24 09:30:10.835110395 +0000 UTC m=+0.111521709 container start 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:30:10 np0005533252 gifted_curie[88899]: 167 167
Nov 24 04:30:10 np0005533252 podman[88883]: 2025-11-24 09:30:10.838612986 +0000 UTC m=+0.115024310 container attach 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 04:30:10 np0005533252 systemd[1]: libpod-65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0.scope: Deactivated successfully.
Nov 24 04:30:10 np0005533252 podman[88883]: 2025-11-24 09:30:10.839773166 +0000 UTC m=+0.116184510 container died 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 04:30:10 np0005533252 podman[88883]: 2025-11-24 09:30:10.744825365 +0000 UTC m=+0.021236709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:30:10 np0005533252 systemd[1]: var-lib-containers-storage-overlay-480b8e5e575b99323c93049fa49456f2ad259b10b3f094cc3dd84886096e0169-merged.mount: Deactivated successfully.
Nov 24 04:30:10 np0005533252 podman[88883]: 2025-11-24 09:30:10.871829563 +0000 UTC m=+0.148240867 container remove 65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_curie, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:30:10 np0005533252 systemd[1]: libpod-conmon-65d629c135d605f86361d63ab0b7424b7e54b6d77df33ce2028395405ef5fbd0.scope: Deactivated successfully.
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:10 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 04:30:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 04:30:11 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 24 04:30:11 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 24 04:30:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:11.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 04:30:12 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 94 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=94) [1] r=0 lpr=94 pi=[70,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:12 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 94 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=94) [1] r=0 lpr=94 pi=[70,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:12 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 24 04:30:12 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 24 04:30:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:12 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 24 04:30:13 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[70,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:13 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[70,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:13 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[70,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:13 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[70,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: Reconfiguring mgr.compute-2.rzcnzg (monmap changed)...
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: Reconfiguring daemon mgr.compute-2.rzcnzg on compute-2
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Nov 24 04:30:13 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 04:30:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:13.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 24 04:30:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:14.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:14 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 04:30:14 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 04:30:14 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 04:30:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:14 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 24 04:30:15 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 97 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=8 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:15 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 97 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=8 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:15 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 97 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:15 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 97 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:30:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:15.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:30:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:30:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:16.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:16 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:30:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 98 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=98) [1] r=0 lpr=98 pi=[66,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 98 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=98) [1] r=0 lpr=98 pi=[66,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 98 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:16 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 98 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=8 ec=54/39 lis/c=95/70 les/c/f=96/71/0 sis=97) [1] r=0 lpr=97 pi=[70,97)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:16 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 24 04:30:16 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 24 04:30:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:16 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 24 04:30:17 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 99 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=99) [1]/[2] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:17 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 99 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=99) [1]/[2] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:17 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 99 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=99) [1]/[2] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:17 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 99 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=99) [1]/[2] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:17 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 04:30:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:17.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:18 np0005533252 systemd-logind[823]: New session 38 of user zuul.
Nov 24 04:30:18 np0005533252 systemd[1]: Started Session 38 of User zuul.
Nov 24 04:30:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:18.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 24 04:30:18 np0005533252 python3.9[89072]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 04:30:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:18 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 24 04:30:19 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 101 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=7 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:19 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 101 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=7 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:19 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 101 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:19 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 101 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:19.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:20 np0005533252 python3.9[89272]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:30:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:20.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:20 np0005533252 systemd[83435]: Starting Mark boot as successful...
Nov 24 04:30:20 np0005533252 systemd[83435]: Finished Mark boot as successful.
Nov 24 04:30:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 24 04:30:20 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 102 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=7 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:20 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 102 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=99/66 les/c/f=100/67/0 sis=101) [1] r=0 lpr=101 pi=[66,101)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:30:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:30:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:20 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:21 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Nov 24 04:30:21 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 04:30:21 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 24 04:30:21 np0005533252 ceph-osd[77497]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 24 04:30:21 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:21 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:30:21 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 04:30:21 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 04:30:21 np0005533252 python3.9[89454]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:30:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:21 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 24 04:30:21 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 103 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=103) [1] r=0 lpr=103 pi=[54,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:22.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:22 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 04:30:22 np0005533252 python3.9[89608]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:30:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 24 04:30:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 104 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=104) [1]/[0] r=-1 lpr=104 pi=[54,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 104 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=104) [1]/[0] r=-1 lpr=104 pi=[54,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:22 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Nov 24 04:30:23 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 04:30:23 np0005533252 python3.9[89762]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:30:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 24 04:30:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 105 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=105) [1] r=0 lpr=105 pi=[54,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:23 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 04:30:23 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 04:30:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:24 np0005533252 python3.9[89915]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:30:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 24 04:30:24 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 106 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=2 ec=54/39 lis/c=104/54 les/c/f=105/55/0 sis=106) [1] r=0 lpr=106 pi=[54,106)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:24 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 106 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=106) [1]/[0] r=-1 lpr=106 pi=[54,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:24 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 106 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=2 ec=54/39 lis/c=104/54 les/c/f=105/55/0 sis=106) [1] r=0 lpr=106 pi=[54,106)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:24 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 106 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=106) [1]/[0] r=-1 lpr=106 pi=[54,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:24 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 04:30:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:24 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:25 np0005533252 python3.9[90065]: ansible-ansible.builtin.service_facts Invoked
Nov 24 04:30:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:25 np0005533252 network[90082]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:30:25 np0005533252 network[90083]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:30:25 np0005533252 network[90084]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:30:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Nov 24 04:30:25 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 04:30:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 24 04:30:25 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 107 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=106/107 n=2 ec=54/39 lis/c=104/54 les/c/f=105/55/0 sis=106) [1] r=0 lpr=106 pi=[54,106)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:25 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 107 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=107) [1] r=0 lpr=107 pi=[54,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:25 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 04:30:25 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 04:30:25 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 04:30:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:25.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:26.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 24 04:30:26 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 108 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=106/54 les/c/f=107/55/0 sis=108) [1] r=0 lpr=108 pi=[54,108)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:26 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 108 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=106/54 les/c/f=107/55/0 sis=108) [1] r=0 lpr=108 pi=[54,108)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:26 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 108 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=108) [1]/[0] r=-1 lpr=108 pi=[54,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:26 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 108 pg[9.12( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=108) [1]/[0] r=-1 lpr=108 pi=[54,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:26 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 24 04:30:27 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 109 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=108/109 n=5 ec=54/39 lis/c=106/54 les/c/f=107/55/0 sis=108) [1] r=0 lpr=108 pi=[54,108)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:27.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 24 04:30:28 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 110 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=108/54 les/c/f=109/55/0 sis=110) [1] r=0 lpr=110 pi=[54,110)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:28 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 110 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=108/54 les/c/f=109/55/0 sis=110) [1] r=0 lpr=110 pi=[54,110)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:28.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:28 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 24 04:30:29 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 111 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=110/111 n=4 ec=54/39 lis/c=108/54 les/c/f=109/55/0 sis=110) [1] r=0 lpr=110 pi=[54,110)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:29 np0005533252 python3.9[90346]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:30:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:29.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 04:30:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:30.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 04:30:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:30:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:30:30 np0005533252 python3.9[90497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:30:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Nov 24 04:30:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 04:30:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:31.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:31 np0005533252 python3.9[90652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:30:32 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 04:30:32 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 04:30:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 24 04:30:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:32.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:33 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 04:30:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:33 np0005533252 python3.9[90811]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:30:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Nov 24 04:30:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 04:30:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:34 np0005533252 python3.9[90896]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:30:34 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 04:30:34 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 04:30:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:34.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 24 04:30:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:35 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 04:30:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Nov 24 04:30:35 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 04:30:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:35.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:36.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:36 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 24 04:30:36 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 114 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=114) [1] r=0 lpr=114 pi=[70,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:36 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 04:30:36 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 04:30:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 24 04:30:37 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 115 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:37 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 115 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=70/70 les/c/f=71/71/0 sis=115) [1]/[2] r=-1 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:37 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 04:30:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:37.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 24 04:30:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 24 04:30:39 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 117 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=115/70 les/c/f=116/71/0 sis=117) [1] r=0 lpr=117 pi=[70,117)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:39 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 117 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=115/70 les/c/f=116/71/0 sis=117) [1] r=0 lpr=117 pi=[70,117)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 24 04:30:40 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 118 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=117/118 n=4 ec=54/39 lis/c=115/70 les/c/f=116/71/0 sis=117) [1] r=0 lpr=117 pi=[70,117)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:41 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Nov 24 04:30:41 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 04:30:41 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 24 04:30:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:41 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 04:30:41 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 04:30:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:42.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:42 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 04:30:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 04:30:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 119 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=119 pruub=12.948238373s) [2] r=-1 lpr=119 pi=[75,119)/1 crt=45'1130 mlcod 0'0 active pruub 262.337310791s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:42 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 119 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=119 pruub=12.948196411s) [2] r=-1 lpr=119 pi=[75,119)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 262.337310791s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 24 04:30:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 120 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=120) [2]/[1] r=0 lpr=120 pi=[75,120)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:43 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 120 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=120) [2]/[1] r=0 lpr=120 pi=[75,120)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 24 04:30:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 04:30:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:43.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:44.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:44 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 04:30:44 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 04:30:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 24 04:30:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 04:30:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:30:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:30:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 24 04:30:45 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 121 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=120/121 n=4 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=120) [2]/[1] async=[2] r=0 lpr=120 pi=[75,120)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:46.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 04:30:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 04:30:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 04:30:46 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 24 04:30:46 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 122 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=120/121 n=4 ec=54/39 lis/c=120/75 les/c/f=121/76/0 sis=122 pruub=14.998853683s) [2] async=[2] r=-1 lpr=122 pi=[75,122)/1 crt=45'1130 mlcod 45'1130 active pruub 268.048889160s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:46 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 122 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=120/121 n=4 ec=54/39 lis/c=120/75 les/c/f=121/76/0 sis=122 pruub=14.998687744s) [2] r=-1 lpr=122 pi=[75,122)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 268.048889160s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Nov 24 04:30:47 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 04:30:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 24 04:30:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 04:30:47 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 04:30:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 04:30:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:30:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:47.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:48.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:48 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 04:30:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Nov 24 04:30:49 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 04:30:49 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 04:30:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 04:30:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 24 04:30:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:30:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:49.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:30:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:50.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:50 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 04:30:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 24 04:30:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 24 04:30:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:30:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:51.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:30:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:52.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 24 04:30:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:30:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 24 04:30:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:30:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:53.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:30:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:54.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:30:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:55.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:30:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:56.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Nov 24 04:30:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 04:30:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:57 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 04:30:57 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 04:30:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 24 04:30:57 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 129 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=129 pruub=10.057298660s) [0] r=-1 lpr=129 pi=[90,129)/1 crt=45'1130 mlcod 0'0 active pruub 274.349548340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:57 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 129 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=129 pruub=10.057257652s) [0] r=-1 lpr=129 pi=[90,129)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 274.349548340s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:30:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:30:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:30:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:57.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:30:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 24 04:30:58 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 130 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=130) [0]/[1] r=0 lpr=130 pi=[90,130)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:30:58 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 130 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=90/91 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=130) [0]/[1] r=0 lpr=130 pi=[90,130)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 04:30:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:58.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:30:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 04:30:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 24 04:30:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Nov 24 04:30:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 04:30:59 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 131 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=130/131 n=4 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=130) [0]/[1] async=[0] r=0 lpr=130 pi=[90,130)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:30:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:30:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:30:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:30:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:30:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:59.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:00.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:00 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 04:31:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 04:31:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 24 04:31:00 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 132 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=130/131 n=4 ec=54/39 lis/c=130/90 les/c/f=131/91/0 sis=132 pruub=15.353184700s) [0] async=[0] r=-1 lpr=132 pi=[90,132)/1 crt=45'1130 mlcod 45'1130 active pruub 282.139495850s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:00 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 132 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=130/131 n=4 ec=54/39 lis/c=130/90 les/c/f=131/91/0 sis=132 pruub=15.352953911s) [0] r=-1 lpr=132 pi=[90,132)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 282.139495850s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:31:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:31:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:31:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 24 04:31:01 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 04:31:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:01.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:02.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 24 04:31:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 24 04:31:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:03.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 24 04:31:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:04.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:05.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:06.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Nov 24 04:31:07 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 04:31:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 24 04:31:07 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 04:31:07 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 04:31:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:07.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:08.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:08 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 04:31:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Nov 24 04:31:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 04:31:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 24 04:31:09 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 138 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=138 pruub=10.914296150s) [2] r=-1 lpr=138 pi=[97,138)/1 crt=45'1130 mlcod 0'0 active pruub 286.821411133s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:09 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 138 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=138 pruub=10.914259911s) [2] r=-1 lpr=138 pi=[97,138)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 286.821411133s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:31:09 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 04:31:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 04:31:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:09.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:10.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:10 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 04:31:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 24 04:31:10 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 139 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=139) [2]/[1] r=0 lpr=139 pi=[97,139)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:10 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 139 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=97/98 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=139) [2]/[1] r=0 lpr=139 pi=[97,139)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 04:31:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 24 04:31:11 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 140 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=139/140 n=5 ec=54/39 lis/c=97/97 les/c/f=98/98/0 sis=139) [2]/[1] async=[2] r=0 lpr=139 pi=[97,139)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:31:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:12.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 24 04:31:12 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 141 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=139/140 n=5 ec=54/39 lis/c=139/97 les/c/f=140/98/0 sis=141 pruub=14.936669350s) [2] async=[2] r=-1 lpr=141 pi=[97,141)/1 crt=45'1130 mlcod 45'1130 active pruub 293.978240967s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:12 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 141 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=139/140 n=5 ec=54/39 lis/c=139/97 les/c/f=140/98/0 sis=141 pruub=14.936123848s) [2] r=-1 lpr=141 pi=[97,141)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 293.978240967s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:31:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 24 04:31:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:13.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:31:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:31:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:15.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:16 np0005533252 python3.9[91264]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:31:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Nov 24 04:31:17 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 04:31:17 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 04:31:17 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 04:31:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 24 04:31:17 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 143 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=143 pruub=10.135750771s) [0] r=-1 lpr=143 pi=[75,143)/1 crt=45'1130 mlcod 0'0 active pruub 294.338439941s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:17 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 143 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=143 pruub=10.135714531s) [0] r=-1 lpr=143 pi=[75,143)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 294.338439941s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:31:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:17.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 24 04:31:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 144 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=144) [0]/[1] r=0 lpr=144 pi=[75,144)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:18 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 144 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=75/76 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=144) [0]/[1] r=0 lpr=144 pi=[75,144)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 04:31:18 np0005533252 python3.9[91552]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 04:31:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:18 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 04:31:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Nov 24 04:31:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:19 np0005533252 python3.9[91704]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 04:31:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 04:31:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:31:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:19 np0005533252 python3.9[91882]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:31:19 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 145 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=144/145 n=5 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=144) [0]/[1] async=[0] r=0 lpr=144 pi=[75,144)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:31:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:19.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Nov 24 04:31:20 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 146 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=144/145 n=5 ec=54/39 lis/c=144/75 les/c/f=145/76/0 sis=146 pruub=15.777785301s) [0] async=[0] r=-1 lpr=146 pi=[75,146)/1 crt=45'1130 mlcod 45'1130 active pruub 302.440887451s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:20 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 146 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=146 pruub=12.317090034s) [0] r=-1 lpr=146 pi=[101,146)/1 crt=45'1130 mlcod 0'0 active pruub 298.980194092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:20 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 146 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=146 pruub=12.317038536s) [0] r=-1 lpr=146 pi=[101,146)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 298.980194092s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:31:20 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 146 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=144/145 n=5 ec=54/39 lis/c=144/75 les/c/f=145/76/0 sis=146 pruub=15.777721405s) [0] r=-1 lpr=146 pi=[75,146)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 302.440887451s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:31:20 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:31:20 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 04:31:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:20.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:20 np0005533252 python3.9[92035]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 04:31:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:21 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Nov 24 04:31:21 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 147 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=147) [0]/[1] r=0 lpr=147 pi=[101,147)/1 crt=45'1130 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:21 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 147 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=101/102 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=147) [0]/[1] r=0 lpr=147 pi=[101,147)/1 crt=45'1130 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 04:31:21 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 04:31:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:21.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Nov 24 04:31:22 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 148 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=147/148 n=5 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=147) [0]/[1] async=[0] r=0 lpr=147 pi=[101,147)/1 crt=45'1130 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.226428) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682226461, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3118, "num_deletes": 251, "total_data_size": 10768040, "memory_usage": 11114304, "flush_reason": "Manual Compaction"}
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682249513, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 6760016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7636, "largest_seqno": 10749, "table_properties": {"data_size": 6746169, "index_size": 8997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 33429, "raw_average_key_size": 22, "raw_value_size": 6716526, "raw_average_value_size": 4474, "num_data_blocks": 390, "num_entries": 1501, "num_filter_entries": 1501, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976580, "oldest_key_time": 1763976580, "file_creation_time": 1763976682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 23123 microseconds, and 10391 cpu microseconds.
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.249549) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 6760016 bytes OK
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.249568) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.251344) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.251361) EVENT_LOG_v1 {"time_micros": 1763976682251357, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.251380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 10753031, prev total WAL file size 10753031, number of live WAL files 2.
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.253336) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(6601KB)], [18(10MB)]
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682253364, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18193153, "oldest_snapshot_seqno": -1}
Nov 24 04:31:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:22 np0005533252 python3.9[92268]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4096 keys, 14280326 bytes, temperature: kUnknown
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682319909, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14280326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14247589, "index_size": 21427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104614, "raw_average_key_size": 25, "raw_value_size": 14167170, "raw_average_value_size": 3458, "num_data_blocks": 918, "num_entries": 4096, "num_filter_entries": 4096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.320132) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14280326 bytes
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.321474) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 273.1 rd, 214.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 10.9 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(4.8) write-amplify(2.1) OK, records in: 4630, records dropped: 534 output_compression: NoCompression
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.321492) EVENT_LOG_v1 {"time_micros": 1763976682321483, "job": 8, "event": "compaction_finished", "compaction_time_micros": 66624, "compaction_time_cpu_micros": 28213, "output_level": 6, "num_output_files": 1, "total_output_size": 14280326, "num_input_records": 4630, "num_output_records": 4096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682322596, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682324535, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.253260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:31:22.324602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:31:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:23 np0005533252 python3.9[92420]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:31:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Nov 24 04:31:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 149 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=147/148 n=5 ec=54/39 lis/c=147/101 les/c/f=148/102/0 sis=149 pruub=15.038232803s) [0] async=[0] r=-1 lpr=149 pi=[101,149)/1 crt=45'1130 mlcod 45'1130 active pruub 304.703369141s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 04:31:23 np0005533252 ceph-osd[77497]: osd.1 pg_epoch: 149 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=147/148 n=5 ec=54/39 lis/c=147/101 les/c/f=148/102/0 sis=149 pruub=15.037920952s) [0] r=-1 lpr=149 pi=[101,149)/1 crt=45'1130 mlcod 0'0 unknown NOTIFY pruub 304.703369141s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 04:31:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:23 np0005533252 python3.9[92498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:31:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:23.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:31:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:24.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:31:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:31:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:25 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:31:25 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:31:25 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:31:25 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:31:25 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:31:25 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:31:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:25 np0005533252 python3.9[92651]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:31:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:25.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:26.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:26 np0005533252 python3.9[92806]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 04:31:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:27 np0005533252 python3.9[92959]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 04:31:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:28 np0005533252 python3.9[93113]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 04:31:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:31:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:31:29 np0005533252 python3.9[93265]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 04:31:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093129 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:31:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:29.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:30.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:31:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:31:30 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:31:30 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:31:30 np0005533252 python3.9[93443]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:31:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:31.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:33 np0005533252 python3.9[93597]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:31:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:33 np0005533252 python3.9[93749]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:31:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:34 np0005533252 python3.9[93828]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:31:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:34.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:35 np0005533252 python3.9[93981]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:31:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:35 np0005533252 python3.9[94059]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:31:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:35.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:36.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:37 np0005533252 python3.9[94212]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:31:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:38.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:39 np0005533252 python3.9[94389]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:31:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:31:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:39.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:40.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:40 np0005533252 python3.9[94542]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 04:31:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:41 np0005533252 python3.9[94692]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:31:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:42.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:42 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:31:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:42 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:31:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:42 np0005533252 python3.9[94845]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:31:42 np0005533252 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 04:31:42 np0005533252 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 04:31:42 np0005533252 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 04:31:42 np0005533252 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 04:31:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:43 np0005533252 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 04:31:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:44.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:44 np0005533252 python3.9[95011]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 04:31:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:31:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:31:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:45 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:31:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:46.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:46.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:47 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:48.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:48 np0005533252 python3.9[95165]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:31:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:49 np0005533252 python3.9[95319]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:31:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001ea0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:49 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:50.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:50 np0005533252 systemd[1]: session-38.scope: Deactivated successfully.
Nov 24 04:31:50 np0005533252 systemd[1]: session-38.scope: Consumed 1min 404ms CPU time.
Nov 24 04:31:50 np0005533252 systemd-logind[823]: Session 38 logged out. Waiting for processes to exit.
Nov 24 04:31:50 np0005533252 systemd-logind[823]: Removed session 38.
Nov 24 04:31:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:50.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:51 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8001ea0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093151 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:31:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:52.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:31:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:52.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:31:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:53 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:31:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:31:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:55 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:55 np0005533252 systemd-logind[823]: New session 39 of user zuul.
Nov 24 04:31:55 np0005533252 systemd[1]: Started Session 39 of User zuul.
Nov 24 04:31:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:56.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:56 np0005533252 python3.9[95503]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:31:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:57 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:31:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:58 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:58 np0005533252 python3.9[95660]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 04:31:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:31:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:31:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:31:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:59 np0005533252 python3.9[95814]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:31:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:31:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:31:59 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:00 np0005533252 python3.9[95923]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 04:32:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:00.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:32:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:32:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003f10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:01 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:02.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:02 np0005533252 python3.9[96080]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:32:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:03 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:04.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:04 np0005533252 python3.9[96234]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:32:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:05 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:05 np0005533252 python3.9[96387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:32:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:06 np0005533252 python3.9[96541]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 04:32:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:07 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:08 np0005533252 python3.9[96691]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:32:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:08.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:09 np0005533252 python3.9[96850]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:32:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003f90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:09 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003f90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:10.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:10.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcafc003980 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:11 np0005533252 python3.9[97004]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:32:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:11 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:12.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:12.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:13 np0005533252 python3.9[97292]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 04:32:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:13 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:14 np0005533252 python3.9[97444]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:32:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:14.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:14 np0005533252 python3.9[97598]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:32:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:32:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:32:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:15 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:16.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc001120 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:17 np0005533252 python3.9[97753]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:32:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:17 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf00014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:18.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:19 np0005533252 python3.9[97907]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:32:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:19 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:20.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:20.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:20 np0005533252 python3.9[98087]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 24 04:32:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093220 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:32:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:21 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:21 np0005533252 systemd[1]: session-39.scope: Deactivated successfully.
Nov 24 04:32:21 np0005533252 systemd[1]: session-39.scope: Consumed 17.461s CPU time.
Nov 24 04:32:21 np0005533252 systemd-logind[823]: Session 39 logged out. Waiting for processes to exit.
Nov 24 04:32:21 np0005533252 systemd-logind[823]: Removed session 39.
Nov 24 04:32:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:22.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:22.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb14009b90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:23 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:24.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc002050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:25 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:26.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:26.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:27 np0005533252 systemd-logind[823]: New session 40 of user zuul.
Nov 24 04:32:27 np0005533252 systemd[1]: Started Session 40 of User zuul.
Nov 24 04:32:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4003ff0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:27 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:28 np0005533252 python3.9[98268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:32:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:28.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:28.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:29 np0005533252 python3.9[98423]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:32:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4004010 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:29 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:32:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:30.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:32:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:32:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:30.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:30 np0005533252 python3.9[98684]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:32:30 np0005533252 systemd[1]: session-40.scope: Deactivated successfully.
Nov 24 04:32:30 np0005533252 systemd[1]: session-40.scope: Consumed 2.241s CPU time.
Nov 24 04:32:30 np0005533252 systemd-logind[823]: Session 40 logged out. Waiting for processes to exit.
Nov 24 04:32:30 np0005533252 systemd-logind[823]: Removed session 40.
Nov 24 04:32:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcaf0001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:31 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae4004030 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:32.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:32.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:32:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:32 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:32:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0036b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:32:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:32:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:33 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:32:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:34.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:32:34 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:32:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:32:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:34.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:32:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0043c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:35 np0005533252 systemd-logind[823]: New session 41 of user zuul.
Nov 24 04:32:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:35 : epoch 69242567 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:32:35 np0005533252 systemd[1]: Started Session 41 of User zuul.
Nov 24 04:32:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:36.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:36.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:36 np0005533252 python3.9[98883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:32:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:37 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:38 np0005533252 python3.9[99037]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:32:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:38.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:38 np0005533252 systemd[1]: session-19.scope: Deactivated successfully.
Nov 24 04:32:38 np0005533252 systemd[1]: session-19.scope: Consumed 8.290s CPU time.
Nov 24 04:32:38 np0005533252 systemd-logind[823]: Session 19 logged out. Waiting for processes to exit.
Nov 24 04:32:38 np0005533252 systemd-logind[823]: Removed session 19.
Nov 24 04:32:38 np0005533252 python3.9[99194]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:32:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:32:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:32:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:39 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:39 np0005533252 python3.9[99328]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:32:40 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:32:40 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:32:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:40.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:40.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcadc0043c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:41 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcae8003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:41 np0005533252 python3.9[99482]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:32:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:42.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093242 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:32:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb08002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:32:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[86455]: 24/11/2025 09:32:43 : epoch 69242567 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcb1400ac90 fd 49 proxy ignored for local
Nov 24 04:32:43 np0005533252 kernel: ganesha.nfsd[91109]: segfault at 50 ip 00007fcbbf51a32e sp 00007fcb8b7fd210 error 4 in libntirpc.so.5.8[7fcbbf4ff000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 24 04:32:43 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:32:43 np0005533252 systemd[1]: Started Process Core Dump (PID 99679/UID 0).
Nov 24 04:32:43 np0005533252 python3.9[99678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:32:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:44.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:44.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:44 np0005533252 python3.9[99833]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:32:44 np0005533252 systemd-coredump[99680]: Process 86459 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 61:#012#0  0x00007fcbbf51a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:32:44 np0005533252 systemd[1]: systemd-coredump@1-99679-0.service: Deactivated successfully.
Nov 24 04:32:44 np0005533252 systemd[1]: systemd-coredump@1-99679-0.service: Consumed 1.148s CPU time.
Nov 24 04:32:44 np0005533252 podman[99864]: 2025-11-24 09:32:44.594103006 +0000 UTC m=+0.029659727 container died f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 04:32:44 np0005533252 systemd[1]: var-lib-containers-storage-overlay-a5a0ea98e4966eef6e9359d2b74c6f9b539ca052e7e8e3709d750180e14075b4-merged.mount: Deactivated successfully.
Nov 24 04:32:44 np0005533252 podman[99864]: 2025-11-24 09:32:44.639706034 +0000 UTC m=+0.075262705 container remove f7b0c338b36b8bdf518e2bc42241679b81bdb5d1de06a8d4b736922ad905c10c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 04:32:44 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:32:44 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:32:44 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.794s CPU time.
Nov 24 04:32:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:32:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:32:45 np0005533252 python3.9[100045]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:32:45 np0005533252 python3.9[100123]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:32:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:46.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:46.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:46 np0005533252 python3.9[100276]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:32:47 np0005533252 python3.9[100354]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:32:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:48.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:48 np0005533252 python3.9[100507]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:32:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:48.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:48 np0005533252 python3.9[100659]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:32:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093249 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:32:49 np0005533252 python3.9[100811]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:32:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:50.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:50 np0005533252 python3.9[100964]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:32:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:51 np0005533252 python3.9[101116]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:32:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:52.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:52.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:53 np0005533252 python3.9[101270]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:32:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:54.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:54 np0005533252 python3.9[101425]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:32:54 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 2.
Nov 24 04:32:54 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:32:54 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.794s CPU time.
Nov 24 04:32:55 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:32:55 np0005533252 podman[101568]: 2025-11-24 09:32:55.205348135 +0000 UTC m=+0.034626120 container create 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 04:32:55 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:32:55 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:32:55 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:32:55 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:32:55 np0005533252 podman[101568]: 2025-11-24 09:32:55.257027981 +0000 UTC m=+0.086305896 container init 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:32:55 np0005533252 podman[101568]: 2025-11-24 09:32:55.262027383 +0000 UTC m=+0.091305278 container start 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 04:32:55 np0005533252 bash[101568]: 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133
Nov 24 04:32:55 np0005533252 podman[101568]: 2025-11-24 09:32:55.190605313 +0000 UTC m=+0.019883198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:32:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:32:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:32:55 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:32:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:32:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:32:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:32:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:32:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:32:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:32:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:32:55 np0005533252 python3.9[101658]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:32:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:56.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:56.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:32:56 np0005533252 python3.9[101833]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:32:57 np0005533252 python3.9[101986]: ansible-service_facts Invoked
Nov 24 04:32:57 np0005533252 network[102003]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:32:57 np0005533252 network[102004]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:32:57 np0005533252 network[102005]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:32:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:32:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:32:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:58.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:32:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:32:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:32:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:00.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:33:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:33:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:00.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:33:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:33:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:02.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:02.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:04 np0005533252 python3.9[102486]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:33:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:04.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:06.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:07 np0005533252 python3.9[102640]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:33:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:08.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:08.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:08 np0005533252 python3.9[102808]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:09 np0005533252 python3.9[102886]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:10 np0005533252 python3.9[103039]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:10.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:10 np0005533252 python3.9[103117]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093311 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:33:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:12.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:12 np0005533252 python3.9[103270]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:13 np0005533252 python3.9[103423]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:33:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:14.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:14.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:14 np0005533252 python3.9[103508]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:33:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:33:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:33:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:16 np0005533252 systemd-logind[823]: Session 41 logged out. Waiting for processes to exit.
Nov 24 04:33:16 np0005533252 systemd[1]: session-41.scope: Deactivated successfully.
Nov 24 04:33:16 np0005533252 systemd[1]: session-41.scope: Consumed 22.420s CPU time.
Nov 24 04:33:16 np0005533252 systemd-logind[823]: Removed session 41.
Nov 24 04:33:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:33:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:16.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:33:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:16.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:33:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:18.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:33:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:18.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:20.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:20 np0005533252 systemd[83435]: Created slice User Background Tasks Slice.
Nov 24 04:33:20 np0005533252 systemd[83435]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 04:33:20 np0005533252 systemd[83435]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 04:33:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:20.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:21 np0005533252 systemd-logind[823]: New session 42 of user zuul.
Nov 24 04:33:21 np0005533252 systemd[1]: Started Session 42 of User zuul.
Nov 24 04:33:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:22 np0005533252 python3.9[103720]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:33:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:22.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:33:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:22 np0005533252 python3.9[103872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:23 np0005533252 python3.9[103950]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:23 np0005533252 systemd[1]: session-42.scope: Deactivated successfully.
Nov 24 04:33:23 np0005533252 systemd[1]: session-42.scope: Consumed 1.376s CPU time.
Nov 24 04:33:23 np0005533252 systemd-logind[823]: Session 42 logged out. Waiting for processes to exit.
Nov 24 04:33:23 np0005533252 systemd-logind[823]: Removed session 42.
Nov 24 04:33:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:24.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:33:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:24.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:33:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:33:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:33:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:26.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400a060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:28.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:29 np0005533252 systemd-logind[823]: New session 43 of user zuul.
Nov 24 04:33:29 np0005533252 systemd[1]: Started Session 43 of User zuul.
Nov 24 04:33:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:30 np0005533252 python3.9[104133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:33:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:33:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:33:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:30.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:31 np0005533252 python3.9[104289]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:32.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:32 np0005533252 python3.9[104465]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:32.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:32 np0005533252 python3.9[104543]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.8adpmnha recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:34 np0005533252 python3.9[104696]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:34 np0005533252 python3.9[104774]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.s94i7n75 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:35 np0005533252 python3.9[104926]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:33:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:36.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:36 np0005533252 python3.9[105079]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:36 np0005533252 python3.9[105157]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:33:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:37 np0005533252 python3.9[105309]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:37 np0005533252 python3.9[105387]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:33:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:38.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:38 np0005533252 python3.9[105540]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:39 np0005533252 python3.9[105694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:39 np0005533252 python3.9[105838]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:33:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:40.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:33:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:33:40 np0005533252 python3.9[106031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:33:40 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:33:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:41 np0005533252 python3.9[106109]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:42.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:42 np0005533252 python3.9[106262]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:33:42 np0005533252 systemd[1]: Reloading.
Nov 24 04:33:42 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:33:42 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:33:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:43 np0005533252 python3.9[106452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:44.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:44 np0005533252 python3.9[106531]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:33:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:33:44 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:33:44 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:33:45 np0005533252 python3.9[106708]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:33:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:33:45 np0005533252 python3.9[106786]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:46.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:46 np0005533252 python3.9[106939]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:33:46 np0005533252 systemd[1]: Reloading.
Nov 24 04:33:46 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:33:46 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:33:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:46 np0005533252 systemd[1]: Starting Create netns directory...
Nov 24 04:33:46 np0005533252 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 04:33:46 np0005533252 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 04:33:46 np0005533252 systemd[1]: Finished Create netns directory.
Nov 24 04:33:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:47 np0005533252 python3.9[107131]: ansible-ansible.builtin.service_facts Invoked
Nov 24 04:33:47 np0005533252 network[107149]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:33:47 np0005533252 network[107150]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:33:47 np0005533252 network[107151]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:33:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:48.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:50.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:52.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:52.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:52 np0005533252 python3.9[107415]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:53 np0005533252 python3.9[107493]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:54 np0005533252 python3.9[107646]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:54.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:54 np0005533252 python3.9[107798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:55 np0005533252 python3.9[107876]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:56.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:57 np0005533252 python3.9[108029]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 04:33:57 np0005533252 systemd[1]: Starting Time & Date Service...
Nov 24 04:33:57 np0005533252 systemd[1]: Started Time & Date Service.
Nov 24 04:33:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:33:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:33:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:58.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:33:58 np0005533252 python3.9[108186]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:33:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:33:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:58.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:33:59 np0005533252 python3.9[108338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:33:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:33:59 np0005533252 python3.9[108416]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:33:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:33:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:00.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:00 np0005533252 python3.9[108594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:34:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:34:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:34:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:00 np0005533252 python3.9[108672]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0kt2xv6y recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:01 np0005533252 python3.9[108824]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:34:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:02.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:02 np0005533252 python3.9[108903]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:03 np0005533252 python3.9[109055]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:34:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:04 np0005533252 python3[109209]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 04:34:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:04.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:04.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:04 np0005533252 python3.9[109361]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:34:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:05 np0005533252 python3.9[109439]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:06 np0005533252 python3.9[109592]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:34:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:06.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:06 np0005533252 python3.9[109670]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:07 np0005533252 python3.9[109822]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:34:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:08 np0005533252 python3.9[109900]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:08.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:08 np0005533252 python3.9[110053]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:34:09 np0005533252 python3.9[110131]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:10 np0005533252 python3.9[110285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:34:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:34:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:34:10 np0005533252 python3.9[110363]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:11 np0005533252 python3.9[110516]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:34:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:12.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:12 np0005533252 python3.9[110672]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:12.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:13 np0005533252 python3.9[110824]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:13 np0005533252 python3.9[110976]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:14.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:14.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:14 np0005533252 python3.9[111129]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 04:34:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:34:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:34:15 np0005533252 python3.9[111281]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 04:34:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:16.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:16 np0005533252 systemd[1]: session-43.scope: Deactivated successfully.
Nov 24 04:34:16 np0005533252 systemd[1]: session-43.scope: Consumed 27.714s CPU time.
Nov 24 04:34:16 np0005533252 systemd-logind[823]: Session 43 logged out. Waiting for processes to exit.
Nov 24 04:34:16 np0005533252 systemd-logind[823]: Removed session 43.
Nov 24 04:34:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:18.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:20.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:20.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:21 np0005533252 systemd-logind[823]: New session 44 of user zuul.
Nov 24 04:34:21 np0005533252 systemd[1]: Started Session 44 of User zuul.
Nov 24 04:34:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:34:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:22.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:34:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:22 np0005533252 python3.9[111490]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 04:34:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:23 np0005533252 python3.9[111642]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:34:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:24.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:24 np0005533252 python3.9[111797]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 24 04:34:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093424 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:34:25 np0005533252 python3.9[111949]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.ihbsxqqe follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:34:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea800032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:25 np0005533252 python3.9[112074]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.ihbsxqqe mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976864.7154546-103-99059411233140/.source.ihbsxqqe _original_basename=.hatlwqqf follow=False checksum=f51461b6f6171622d95e6dfd4bfc1927ea303d6e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:34:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:26.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:34:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:26.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:27 np0005533252 python3.9[112227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:34:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:27 np0005533252 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 04:34:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:27 np0005533252 python3.9[112381]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnPh2FYKCqB5Rxe2d73LAea+vmvipLFksP43GM8QFNtdkL9UXsBFKIlbvhCArQ0+q5/EXcOy13rEWVabeuzYdek35bvnCWnqrlaoEFqEV7Y7SDrutMHxHvnLthse/1jj4AvtjvQXG0bKruDgtz2CBksRaKWTEHPZHLOYOwWLGogWVazacOPagjlMQ9UdpYvwfqgKnjMpl6sHCvQC7C0kTNvrYrrhUZqReUWyggx/XcC/YJvSYvMW1wNRhYmypPzEXu8QXt0ywHvCucILZcZqBE1/lKAUCLqDEkB/xpMnKiZ/EmDtyv8AP7H231WeEoaU4BziaD2jSd/H6lr2JJwpKBlrGkti8gQpJHtDytAtbVtrLD5fW+1GkobqN/2GXjNnvzuLB36OhT4nysfJ6BPP3sgaaZ2RJSzP5hI3jfFVn/NYjbaRIoo+tOB50PJeIPj6c5uMX+Qcb2V6EOUwogIRhtwN7A1XHh8dQPCUVYCUmNIq1K7NZ3Hxf+BqhVsSj6SK0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINu5/fR7YXhb91kwrOd7U+mnimdcm+o61ru6zTYmFIZO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJFgzeIWa1Ve+dIxs7Pjz8TnBGpgkm/KAIeb7PoVU+QfPqP68TrTBJjwgq/5DOilENFVsFmr+3WdERS0uMWfxXo=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyBn9mTS8EhHsIKYO0tLgGtKOo5KK33vyjqFzXOs43ZcW8GNKmSQ7DXnq80OCGGkDE9aL5uVEQ82MaYpYE8rZVZGrTF1heqhLe2ModNgcaUA+dBOzScRYEm5JAsj6ajcAc7fiPseazHiC80XQlEo+bwF6XHf/i9t7MHMqQCKdM+qnsEd6JeYe+Zy6X7Web4mN4mbvDaHxjBAdxuR0g0bKoYRjFeeNQyQQ/2Fpsa/i/ZqFVU59TrQ1vm9wLk9wJQd7mBQsdxizekzHGMkE5Ub8VdN43iscVyKKhZWeUOyEK2HASt+n/fHjIsFD65a4GLiHFuJ8DJ4CrWFrwt1RIXLkNFOImjH5kiMO55d/Qogf5F33Mkto3ntPQP/tShtBEDIzc9JCE7vYLFjk/bMSUcK9/u41E8suBkZBHnzXC8+eB6XCoYYNxA+cowaSg5+YCSxL6yON9u34LV+i3jZosNYNivLHjOmOsyGEs/Az6NLkHYzxYCHY042etu9Py2/lONrk=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDX1cMQF3siye3qNUS07EBS+iX+poG1/aIqFR51WsltV#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFy78zaPxoZwc0f5pE0EdJcb6EwSlQGeMhelmYFBlrBeD2fH3vCrxrTbbmmM9DSQFtIo8sNV7/s7CV9dvbvMOzQ=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYj9G0Ft/Psyl/13EAEebfB7qR7surocLwWTVKKcclTBPrKIFnHkxuGFUee1a6DQGup+ENEdhJN2MOXFv/jskxJUsoILDHuvx17jHKFvMSR7ycfe+1umEqgfKCHGxlLXobZjj7t2PzAveNkTk+zeX8pqLH1q86LI01fH0n3jdSksqEXvxbiDLMspPTM3alGxNI4pztPvN3i+0qfCPD5SL9dhFsP4C8IVTBWAM4g7Qd6LyKhx+MVoEVecLL6jsM8z+zArVsZKFcZOKFpl0MTeWdpNR0b4u0ILO59y38D/dVoM45NRDpIi7HyoS7TsD0XpP+3zP8hGo4M35QU+a9YRmdCaUChLmqjfUprjnQrusAuQfP406rQ3JlgWs3YAwF0IPhvHv57pPWm3xGwKPFpO0Jguw5cQdZZvYk4tS9JvlCz5+Yyfm3+9T+k1KLfcZ+zlvOYKz+BXNiPfk1bF9ML7/KEIyJjGf32o5nEp0H1sH24wrSIroXa+woila4KBTffe8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFQe/vdPzZywzEntIohbfJ9grfNBp30Atbg8qy8BeQ3c#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPhaUxRkg9RrudtznCKCcwWhf1hoSfCyCfTHlGI62beVEpMD4en9bzfcuYnvB/Qm3vgzgUVMpS53KCL9bmqBfT8=#012 create=True mode=0644 path=/tmp/ansible.ihbsxqqe state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:28.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:28 np0005533252 python3.9[112534]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ihbsxqqe' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:34:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:29 np0005533252 python3.9[112688]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ihbsxqqe state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:30 np0005533252 systemd[1]: session-44.scope: Deactivated successfully.
Nov 24 04:34:30 np0005533252 systemd[1]: session-44.scope: Consumed 4.865s CPU time.
Nov 24 04:34:30 np0005533252 systemd-logind[823]: Session 44 logged out. Waiting for processes to exit.
Nov 24 04:34:30 np0005533252 systemd-logind[823]: Removed session 44.
Nov 24 04:34:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:34:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:30.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:34:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:34:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:34:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:30.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:32.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:32.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:34:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:34.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:35 np0005533252 systemd-logind[823]: New session 45 of user zuul.
Nov 24 04:34:35 np0005533252 systemd[1]: Started Session 45 of User zuul.
Nov 24 04:34:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:36 np0005533252 python3.9[112869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:34:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:36 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:34:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:36 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:34:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:36.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:37 np0005533252 python3.9[113026]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 04:34:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:38.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:38 np0005533252 python3.9[113181]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:34:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:34:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:38.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:34:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:39 np0005533252 python3.9[113334]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:34:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:34:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:40 np0005533252 python3.9[113488]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:34:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 04:34:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 04:34:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:40.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:41 np0005533252 python3.9[113665]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:34:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:41 np0005533252 systemd[1]: session-45.scope: Deactivated successfully.
Nov 24 04:34:41 np0005533252 systemd[1]: session-45.scope: Consumed 3.556s CPU time.
Nov 24 04:34:41 np0005533252 systemd-logind[823]: Session 45 logged out. Waiting for processes to exit.
Nov 24 04:34:41 np0005533252 systemd-logind[823]: Removed session 45.
Nov 24 04:34:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:34:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:42.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:34:42 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:34:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:44.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:44.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093444 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:34:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:34:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:34:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:34:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:34:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:34:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:34:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:34:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:34:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:46.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:34:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:46.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:47 np0005533252 systemd-logind[823]: New session 46 of user zuul.
Nov 24 04:34:47 np0005533252 systemd[1]: Started Session 46 of User zuul.
Nov 24 04:34:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:48 np0005533252 python3.9[113930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.212263) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888212315, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2206, "num_deletes": 252, "total_data_size": 6015776, "memory_usage": 6075992, "flush_reason": "Manual Compaction"}
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888230898, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2326024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10754, "largest_seqno": 12955, "table_properties": {"data_size": 2319512, "index_size": 3391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16211, "raw_average_key_size": 20, "raw_value_size": 2305312, "raw_average_value_size": 2870, "num_data_blocks": 151, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976683, "oldest_key_time": 1763976683, "file_creation_time": 1763976888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 18681 microseconds, and 5198 cpu microseconds.
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.230954) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2326024 bytes OK
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.230975) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233473) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233502) EVENT_LOG_v1 {"time_micros": 1763976888233496, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6005986, prev total WAL file size 6005986, number of live WAL files 2.
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.235184) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2271KB)], [21(13MB)]
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888235251, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16606350, "oldest_snapshot_seqno": -1}
Nov 24 04:34:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:48.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4474 keys, 14776348 bytes, temperature: kUnknown
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888361228, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14776348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14742093, "index_size": 21985, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 112722, "raw_average_key_size": 25, "raw_value_size": 14656155, "raw_average_value_size": 3275, "num_data_blocks": 942, "num_entries": 4474, "num_filter_entries": 4474, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.361509) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14776348 bytes
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.362637) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.7 rd, 117.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.6 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(13.5) write-amplify(6.4) OK, records in: 4899, records dropped: 425 output_compression: NoCompression
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.362658) EVENT_LOG_v1 {"time_micros": 1763976888362649, "job": 10, "event": "compaction_finished", "compaction_time_micros": 126073, "compaction_time_cpu_micros": 28584, "output_level": 6, "num_output_files": 1, "total_output_size": 14776348, "num_input_records": 4899, "num_output_records": 4474, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888363455, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888366066, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.235084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:34:48 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:34:48.366123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:34:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:48.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:49 np0005533252 python3.9[114087]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:34:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:50 np0005533252 python3.9[114172]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 04:34:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:50.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:34:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:50.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:34:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:34:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:34:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:34:51 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:34:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:52.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:52 np0005533252 python3.9[114349]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:34:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:52.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:53 np0005533252 python3.9[114500]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 04:34:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:54.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:54.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:54 np0005533252 python3.9[114651]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:34:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:55 np0005533252 python3.9[114801]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:34:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:56 np0005533252 systemd[1]: session-46.scope: Deactivated successfully.
Nov 24 04:34:56 np0005533252 systemd[1]: session-46.scope: Consumed 5.690s CPU time.
Nov 24 04:34:56 np0005533252 systemd-logind[823]: Session 46 logged out. Waiting for processes to exit.
Nov 24 04:34:56 np0005533252 systemd-logind[823]: Removed session 46.
Nov 24 04:34:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:34:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:34:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:56.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:34:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:58.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:34:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:34:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:34:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:34:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:34:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.210086) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900210144, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 379, "num_deletes": 251, "total_data_size": 444978, "memory_usage": 452760, "flush_reason": "Manual Compaction"}
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900234050, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 294369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12960, "largest_seqno": 13334, "table_properties": {"data_size": 292118, "index_size": 415, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5388, "raw_average_key_size": 17, "raw_value_size": 287616, "raw_average_value_size": 955, "num_data_blocks": 18, "num_entries": 301, "num_filter_entries": 301, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976888, "oldest_key_time": 1763976888, "file_creation_time": 1763976900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 24030 microseconds, and 1895 cpu microseconds.
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.234119) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 294369 bytes OK
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.234141) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235860) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235878) EVENT_LOG_v1 {"time_micros": 1763976900235873, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235898) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 442470, prev total WAL file size 442470, number of live WAL files 2.
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.236391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(287KB)], [24(14MB)]
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900236567, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 15070717, "oldest_snapshot_seqno": -1}
Nov 24 04:35:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:00.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4262 keys, 12940962 bytes, temperature: kUnknown
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900396578, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 12940962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12909894, "index_size": 19310, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 109225, "raw_average_key_size": 25, "raw_value_size": 12829393, "raw_average_value_size": 3010, "num_data_blocks": 815, "num_entries": 4262, "num_filter_entries": 4262, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763976900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.396817) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12940962 bytes
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.398553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.2 rd, 80.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(95.2) write-amplify(44.0) OK, records in: 4775, records dropped: 513 output_compression: NoCompression
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.398572) EVENT_LOG_v1 {"time_micros": 1763976900398563, "job": 12, "event": "compaction_finished", "compaction_time_micros": 160046, "compaction_time_cpu_micros": 31457, "output_level": 6, "num_output_files": 1, "total_output_size": 12940962, "num_input_records": 4775, "num_output_records": 4262, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900398746, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900401031, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.236310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:35:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:35:00.401153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:35:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:00.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:01 np0005533252 systemd-logind[823]: New session 47 of user zuul.
Nov 24 04:35:01 np0005533252 systemd[1]: Started Session 47 of User zuul.
Nov 24 04:35:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:02.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:02 np0005533252 python3.9[115009]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:35:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:02.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea98003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:04.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:04 np0005533252 python3.9[115166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:04.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:04 np0005533252 python3.9[115318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:05 np0005533252 python3.9[115470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:05 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:06.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:06 np0005533252 python3.9[115594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976905.1023533-154-117669381383635/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=1aed28cbd157b82f7069a716a80af3c0e21ff713 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:06.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:07 np0005533252 python3.9[115746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:07 np0005533252 python3.9[115869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976906.622611-154-42050091528181/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=3228e523f8b01d6a11882d8cc1d2d959030dab43 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:07 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093508 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:35:08 np0005533252 python3.9[116022]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:08.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:08.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:08 np0005533252 python3.9[116145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976907.7451613-154-264373962335238/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=d8cbf8f331cdf03d2c25f53533d79c8d0bfed30c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400add0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:09 np0005533252 python3.9[116297]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:09 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:10 np0005533252 python3.9[116450]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:10.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:10 np0005533252 python3.9[116602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:11 np0005533252 python3.9[116725]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976910.290428-325-48718569917240/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=51a6c591c203944590268a477cdb8f6d7c46652a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa400add0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:11 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:11 np0005533252 python3.9[116877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:12.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:12 np0005533252 python3.9[117001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976911.4913163-325-62791605727008/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=a9a797b79c320330a0fbef3d6d785446f2b400de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:12.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:13 np0005533252 python3.9[117153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:13 np0005533252 python3.9[117277]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976912.6152792-325-3391753012308/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=293490359390b00694df182b7f282079077f474f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:13 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:14 np0005533252 python3.9[117431]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:14.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:14 np0005533252 python3.9[117583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:15 np0005533252 python3.9[117735]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:35:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:35:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:15 np0005533252 python3.9[117858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976914.9394274-494-56946476035865/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=bc8679c076a79311d7c86b9b1a6f9b2a996ee747 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:15 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:16.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:16 np0005533252 python3.9[118011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:16 np0005533252 python3.9[118134]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976915.983199-494-144141505727691/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=a9a797b79c320330a0fbef3d6d785446f2b400de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:35:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:17 np0005533252 python3.9[118286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:17 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:18 np0005533252 python3.9[118410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976917.1200743-494-130064663828823/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=14c307e0f7068641dd695e1233929e25344f95a3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:18.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:19 np0005533252 python3.9[118562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:19 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:20 np0005533252 python3.9[118715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:20.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:20 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:35:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:20 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:35:20 np0005533252 python3.9[118863]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976919.5814614-676-96888622235117/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:21 np0005533252 python3.9[119015]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:21 np0005533252 python3.9[119167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:21 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:22.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:22 np0005533252 python3.9[119291]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976921.445923-752-38312562008629/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:22 np0005533252 python3.9[119443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:35:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:23 np0005533252 python3.9[119595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:23 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:24 np0005533252 python3.9[119719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976923.1143703-818-250430191272835/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:24.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:24 np0005533252 python3.9[119871]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:25 np0005533252 python3.9[120023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:25 np0005533252 python3.9[120146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976924.847026-884-129749894765204/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:25 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea840023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:26.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:26 np0005533252 python3.9[120299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:27 np0005533252 python3.9[120451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:27 np0005533252 python3.9[120574]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976926.6229327-949-69149988085739/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:27 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:28 np0005533252 python3.9[120728]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:28.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:28 np0005533252 python3.9[120880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:29 np0005533252 python3.9[121003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976928.386976-1017-34809651986324/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:29 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093530 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:35:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:30.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:35:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:35:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:30.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:31 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:32.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:32.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:33 np0005533252 systemd[1]: session-47.scope: Deactivated successfully.
Nov 24 04:35:33 np0005533252 systemd[1]: session-47.scope: Consumed 21.621s CPU time.
Nov 24 04:35:33 np0005533252 systemd-logind[823]: Session 47 logged out. Waiting for processes to exit.
Nov 24 04:35:33 np0005533252 systemd-logind[823]: Removed session 47.
Nov 24 04:35:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:33 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:34.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:35 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:37 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:38.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:38 np0005533252 systemd-logind[823]: New session 48 of user zuul.
Nov 24 04:35:38 np0005533252 systemd[1]: Started Session 48 of User zuul.
Nov 24 04:35:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:39 np0005533252 python3.9[121188]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:39 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:40.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:40 np0005533252 python3.9[121341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:41 np0005533252 python3.9[121489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976939.957228-63-105236783014095/.source.conf _original_basename=ceph.conf follow=False checksum=35be1475912cb94f172c67eb64af3d903820f5fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:41 np0005533252 python3.9[121641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:35:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:41 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:42.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:42 np0005533252 python3.9[121765]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976941.3996968-63-262693485155571/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5b68b38eb199b40419da711d3119a1cd74c89fee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:35:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:43 np0005533252 systemd[1]: session-48.scope: Deactivated successfully.
Nov 24 04:35:43 np0005533252 systemd[1]: session-48.scope: Consumed 2.539s CPU time.
Nov 24 04:35:43 np0005533252 systemd-logind[823]: Session 48 logged out. Waiting for processes to exit.
Nov 24 04:35:43 np0005533252 systemd-logind[823]: Removed session 48.
Nov 24 04:35:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea68001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:43 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea84004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:35:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:35:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:45 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:35:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:46.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:35:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:46.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:47 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea70001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:48.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:48 np0005533252 systemd-logind[823]: New session 49 of user zuul.
Nov 24 04:35:48 np0005533252 systemd[1]: Started Session 49 of User zuul.
Nov 24 04:35:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:49 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:49 np0005533252 python3.9[121948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:35:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:50.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:51 np0005533252 python3.9[122130]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4008e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:35:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:35:51 np0005533252 python3.9[122338]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:35:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:51 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:35:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:52.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:35:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:52 np0005533252 python3.9[122489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:35:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea78000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4008e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:53 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:53 np0005533252 python3.9[122641]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 04:35:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:54.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:54.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:55 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feaa4008e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:35:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:56.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:35:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:56.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:56 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 24 04:35:57 np0005533252 python3.9[122799]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:35:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:35:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:35:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:35:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:57 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:58 np0005533252 python3.9[122909]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:35:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:58.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:35:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:35:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:35:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:35:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:58.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:35:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:35:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:35:59 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:00.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:36:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:36:00 np0005533252 python3.9[123063]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:36:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:00.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea780016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:01 np0005533252 python3[123243]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 24 04:36:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:01 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea80004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:02 np0005533252 python3.9[123396]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:02.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:03 np0005533252 kernel: ganesha.nfsd[121791]: segfault at 50 ip 00007feb4d57f32e sp 00007feb1d7f9210 error 4 in libntirpc.so.5.8[7feb4d564000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 24 04:36:03 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:36:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[101610]: 24/11/2025 09:36:03 : epoch 69242647 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fea980048e0 fd 38 proxy ignored for local
Nov 24 04:36:03 np0005533252 systemd[1]: Started Process Core Dump (PID 123549/UID 0).
Nov 24 04:36:03 np0005533252 python3.9[123548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:04 np0005533252 python3.9[123629]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:04 np0005533252 systemd-coredump[123550]: Process 101625 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 63:#012#0  0x00007feb4d57f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:36:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:04.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:04 np0005533252 systemd[1]: systemd-coredump@2-123549-0.service: Deactivated successfully.
Nov 24 04:36:04 np0005533252 systemd[1]: systemd-coredump@2-123549-0.service: Consumed 1.260s CPU time.
Nov 24 04:36:04 np0005533252 podman[123786]: 2025-11-24 09:36:04.898629403 +0000 UTC m=+0.032519672 container died 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 04:36:04 np0005533252 systemd[1]: var-lib-containers-storage-overlay-b08c191ca12641a1e68bb6e456ff7064fde48d688012042f510575c5e2cb0c34-merged.mount: Deactivated successfully.
Nov 24 04:36:04 np0005533252 podman[123786]: 2025-11-24 09:36:04.946860902 +0000 UTC m=+0.080751161 container remove 89e9e06f9211bc7e046f65662fa13ddb2cb8e39af97781a83cf33fe42fb7a133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Nov 24 04:36:04 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:36:05 np0005533252 python3.9[123782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:05 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:36:05 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.576s CPU time.
Nov 24 04:36:05 np0005533252 python3.9[123906]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ordhq9gz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:06 np0005533252 python3.9[124059]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:06.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:06.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:06 np0005533252 python3.9[124137]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:07 np0005533252 python3.9[124289]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:36:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:08.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:08 np0005533252 python3[124443]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 04:36:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:08.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093609 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:36:09 np0005533252 python3.9[124595]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:10 np0005533252 python3.9[124721]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976969.0022404-432-163909908000755/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:10.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:11 np0005533252 python3.9[124873]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:11 np0005533252 python3.9[124998]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976970.5721831-477-220323460297226/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:12.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:12 np0005533252 python3.9[125151]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:12.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:13 np0005533252 python3.9[125276]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976972.146237-522-46863788175343/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:13 np0005533252 python3.9[125428]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:14.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:14 np0005533252 python3.9[125554]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976973.492249-567-138280405723606/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:14.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:15 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 3.
Nov 24 04:36:15 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:36:15 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.576s CPU time.
Nov 24 04:36:15 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:36:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:36:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:36:15 np0005533252 podman[125753]: 2025-11-24 09:36:15.476946809 +0000 UTC m=+0.045802440 container create 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:36:15 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:36:15 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:36:15 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:36:15 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:36:15 np0005533252 podman[125753]: 2025-11-24 09:36:15.53821913 +0000 UTC m=+0.107074761 container init 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:36:15 np0005533252 podman[125753]: 2025-11-24 09:36:15.543047827 +0000 UTC m=+0.111903458 container start 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Nov 24 04:36:15 np0005533252 bash[125753]: 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf
Nov 24 04:36:15 np0005533252 podman[125753]: 2025-11-24 09:36:15.454436164 +0000 UTC m=+0.023291825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:36:15 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:36:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:36:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:36:15 np0005533252 python3.9[125743]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:36:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:36:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:36:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:36:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:36:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:36:16 np0005533252 python3.9[125935]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976975.0323694-612-222204463644646/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:16.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:17 np0005533252 python3.9[126087]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:17 np0005533252 python3.9[126239]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:36:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:18.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:18.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:19 np0005533252 python3.9[126395]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:19 np0005533252 python3.9[126547]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:36:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:20.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:20.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:20 np0005533252 python3.9[126726]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:36:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:36:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:36:21 np0005533252 python3.9[126880]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:36:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:22.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:22.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:22 np0005533252 python3.9[127036]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:24 np0005533252 python3.9[127187]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:36:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:24.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:24.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:25 np0005533252 python3.9[127340]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:36:25 np0005533252 ovs-vsctl[127341]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 24 04:36:26 np0005533252 python3.9[127494]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:36:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:26.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:27 np0005533252 python3.9[127649]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:36:27 np0005533252 ovs-vsctl[127650]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:36:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:28 np0005533252 python3.9[127812]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:36:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:28.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:28.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:28 np0005533252 python3.9[127970]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:36:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:29 np0005533252 python3.9[128122]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:30 np0005533252 python3.9[128201]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:36:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:36:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:36:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:30.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:30 np0005533252 python3.9[128353]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 04:36:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:30.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 04:36:31 np0005533252 python3.9[128431]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:36:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093631 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:36:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:32.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:32 np0005533252 python3.9[128584]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:33 np0005533252 python3.9[128736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 04:36:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 6424 writes, 26K keys, 6424 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6424 writes, 1124 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6424 writes, 26K keys, 6424 commit groups, 1.0 writes per commit group, ingest: 19.84 MB, 0.03 MB/s#012Interval WAL: 6424 writes, 1124 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Nov 24 04:36:33 np0005533252 python3.9[128814]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:34.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:34 np0005533252 python3.9[128967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:34.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:35 np0005533252 python3.9[129045]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0001380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:36 np0005533252 python3.9[129198]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:36:36 np0005533252 systemd[1]: Reloading.
Nov 24 04:36:36 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:36:36 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:36:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:36.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:36.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:37 np0005533252 python3.9[129387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:37 np0005533252 python3.9[129465]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:38.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:38 np0005533252 python3.9[129618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:38.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:39 np0005533252 python3.9[129696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:40 np0005533252 python3.9[129848]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:36:40 np0005533252 systemd[1]: Reloading.
Nov 24 04:36:40 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:36:40 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:36:40 np0005533252 systemd[1]: Starting Create netns directory...
Nov 24 04:36:40 np0005533252 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 04:36:40 np0005533252 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 04:36:40 np0005533252 systemd[1]: Finished Create netns directory.
Nov 24 04:36:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:40.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:40.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:41 np0005533252 python3.9[130067]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:36:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:42 np0005533252 python3.9[130220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:42.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:42 np0005533252 python3.9[130343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977001.8409321-1365-213441516940144/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:36:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0001ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:44 np0005533252 python3.9[130496]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:36:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093644 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:36:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:44.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:44 np0005533252 python3.9[130648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:36:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:36:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:36:45 np0005533252 python3.9[130771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977004.4070644-1440-234980861693327/.source.json _original_basename=.bik360rx follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:46 np0005533252 python3.9[130924]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:36:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:46.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:46.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:48.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:48.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:49 np0005533252 python3.9[131352]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 24 04:36:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:50 np0005533252 python3.9[131505]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 04:36:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:36:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:50.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:36:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:50.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:51 np0005533252 python3.9[131657]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 04:36:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:52.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:52.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:53 np0005533252 python3[131838]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 04:36:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:36:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:54.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:54.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f8003c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:56 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:36:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:56 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:36:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:56 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:36:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:56.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:57 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:57 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:36:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:57 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:36:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:58.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:36:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:36:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:36:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:36:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:36:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:58.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:36:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:36:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:36:59 np0005533252 podman[131852]: 2025-11-24 09:36:59.234155805 +0000 UTC m=+5.911098548 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 04:36:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:36:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:36:59 np0005533252 podman[132042]: 2025-11-24 09:36:59.376266141 +0000 UTC m=+0.050338088 container create c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 04:36:59 np0005533252 podman[132042]: 2025-11-24 09:36:59.350323815 +0000 UTC m=+0.024395762 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 04:36:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:59 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:59 np0005533252 python3[131838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 04:36:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:59 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:59 : epoch 6924270f : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:36:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:36:59 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:36:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:36:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:37:00 np0005533252 python3.9[132313]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:37:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:37:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:00.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:01 np0005533252 python3.9[132492]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:01 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:01 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4404001050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:01 np0005533252 python3.9[132568]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:37:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:01 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:02 np0005533252 python3.9[132720]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977021.5781636-1704-104276392668209/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:02.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 04:37:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2467 writes, 14K keys, 2467 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s#012Cumulative WAL: 2467 writes, 2467 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2467 writes, 14K keys, 2467 commit groups, 1.0 writes per commit group, ingest: 38.73 MB, 0.06 MB/s#012Interval WAL: 2467 writes, 2467 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    149.3      0.14              0.04         6    0.023       0      0       0.0       0.0#012  L6      1/0   12.34 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.0    131.9    115.8      0.54              0.13         5    0.107     21K   2256       0.0       0.0#012 Sum      1/0   12.34 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0    104.5    122.7      0.68              0.18        11    0.062     21K   2256       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0    104.8    123.0      0.68              0.18        10    0.068     21K   2256       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    131.9    115.8      0.54              0.13         5    0.107     21K   2256       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    151.2      0.14              0.04         5    0.028       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.021#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 304.00 MB usage: 2.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(159,2.00 MB,0.659315%) FilterBlock(11,69.80 KB,0.0224214%) IndexBlock(11,139.67 KB,0.0448679%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 04:37:02 np0005533252 python3.9[132796]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 04:37:02 np0005533252 systemd[1]: Reloading.
Nov 24 04:37:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:02 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:37:02 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:37:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:02.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:03 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:03 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:03 np0005533252 python3.9[132907]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:37:03 np0005533252 systemd[1]: Reloading.
Nov 24 04:37:03 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:37:03 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:37:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:03 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4404001b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:04 np0005533252 systemd[1]: Starting ovn_controller container...
Nov 24 04:37:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:04.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:04 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:37:04 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fde3f3c15c720ba18b3ff150c91bff40d0b825e6318befc8f6c0df3d9f4c75/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 04:37:04 np0005533252 systemd[1]: Started /usr/bin/podman healthcheck run c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2.
Nov 24 04:37:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:04.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:04 np0005533252 podman[132950]: 2025-11-24 09:37:04.903522754 +0000 UTC m=+0.832150450 container init c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 04:37:04 np0005533252 ovn_controller[132966]: + sudo -E kolla_set_configs
Nov 24 04:37:04 np0005533252 podman[132950]: 2025-11-24 09:37:04.927853252 +0000 UTC m=+0.856480948 container start c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 04:37:04 np0005533252 edpm-start-podman-container[132950]: ovn_controller
Nov 24 04:37:04 np0005533252 systemd[1]: Created slice User Slice of UID 0.
Nov 24 04:37:04 np0005533252 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 24 04:37:04 np0005533252 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 24 04:37:04 np0005533252 systemd[1]: Starting User Manager for UID 0...
Nov 24 04:37:05 np0005533252 edpm-start-podman-container[132949]: Creating additional drop-in dependency for "ovn_controller" (c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2)
Nov 24 04:37:05 np0005533252 podman[132973]: 2025-11-24 09:37:05.021437495 +0000 UTC m=+0.080889657 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 04:37:05 np0005533252 systemd[1]: c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2-479cdedca759de6a.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 04:37:05 np0005533252 systemd[1]: c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2-479cdedca759de6a.service: Failed with result 'exit-code'.
Nov 24 04:37:05 np0005533252 systemd[1]: Reloading.
Nov 24 04:37:05 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:37:05 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:37:05 np0005533252 systemd[133000]: Queued start job for default target Main User Target.
Nov 24 04:37:05 np0005533252 systemd[133000]: Created slice User Application Slice.
Nov 24 04:37:05 np0005533252 systemd[133000]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 24 04:37:05 np0005533252 systemd[133000]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 04:37:05 np0005533252 systemd[133000]: Reached target Paths.
Nov 24 04:37:05 np0005533252 systemd[133000]: Reached target Timers.
Nov 24 04:37:05 np0005533252 systemd[133000]: Starting D-Bus User Message Bus Socket...
Nov 24 04:37:05 np0005533252 systemd[133000]: Starting Create User's Volatile Files and Directories...
Nov 24 04:37:05 np0005533252 systemd[133000]: Finished Create User's Volatile Files and Directories.
Nov 24 04:37:05 np0005533252 systemd[133000]: Listening on D-Bus User Message Bus Socket.
Nov 24 04:37:05 np0005533252 systemd[133000]: Reached target Sockets.
Nov 24 04:37:05 np0005533252 systemd[133000]: Reached target Basic System.
Nov 24 04:37:05 np0005533252 systemd[133000]: Reached target Main User Target.
Nov 24 04:37:05 np0005533252 systemd[133000]: Startup finished in 158ms.
Nov 24 04:37:05 np0005533252 systemd[1]: Started User Manager for UID 0.
Nov 24 04:37:05 np0005533252 systemd[1]: Started ovn_controller container.
Nov 24 04:37:05 np0005533252 systemd[1]: Started Session c1 of User root.
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: INFO:__main__:Validating config file
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: INFO:__main__:Writing out command to execute
Nov 24 04:37:05 np0005533252 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: ++ cat /run_command
Nov 24 04:37:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:05 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: + ARGS=
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: + sudo kolla_copy_cacerts
Nov 24 04:37:05 np0005533252 systemd[1]: Started Session c2 of User root.
Nov 24 04:37:05 np0005533252 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: + [[ ! -n '' ]]
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: + . kolla_extend_start
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: + umask 0022
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.4495] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.4505] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.4520] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.4525] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.4530] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 04:37:05 np0005533252 kernel: br-int: entered promiscuous mode
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00022|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00023|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.4726] manager: (ovn-f6640d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 24 04:37:05 np0005533252 systemd-udevd[133102]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:37:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:05 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4000ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:05 np0005533252 kernel: genev_sys_6081: entered promiscuous mode
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.5066] device (genev_sys_6081): carrier: link connected
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.5070] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 04:37:05 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:05Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 04:37:05 np0005533252 NetworkManager[48870]: <info>  [1763977025.6515] manager: (ovn-fae732-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 24 04:37:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:05 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093706 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:37:06 np0005533252 NetworkManager[48870]: <info>  [1763977026.4541] manager: (ovn-feb242-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 24 04:37:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:06.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:06 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:37:06 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:37:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:06 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:37:06 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:37:07 np0005533252 python3.9[133234]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:37:07 np0005533252 ovs-vsctl[133258]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 24 04:37:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:07 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:07 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:07 np0005533252 python3.9[133412]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:37:07 np0005533252 ovs-vsctl[133414]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 24 04:37:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:07 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:08.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:08.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:09 np0005533252 python3.9[133568]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:37:09 np0005533252 ovs-vsctl[133569]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 24 04:37:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:09 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:09 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4404002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:09 np0005533252 systemd[1]: session-49.scope: Deactivated successfully.
Nov 24 04:37:09 np0005533252 systemd[1]: session-49.scope: Consumed 57.703s CPU time.
Nov 24 04:37:09 np0005533252 systemd-logind[823]: Session 49 logged out. Waiting for processes to exit.
Nov 24 04:37:09 np0005533252 systemd-logind[823]: Removed session 49.
Nov 24 04:37:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:09 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:10.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:10.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:11 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:11 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:11 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4404002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:12.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:13 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:13 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:13 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:14.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:14.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:37:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:37:15 np0005533252 systemd[1]: Stopping User Manager for UID 0...
Nov 24 04:37:15 np0005533252 systemd[133000]: Activating special unit Exit the Session...
Nov 24 04:37:15 np0005533252 systemd[133000]: Stopped target Main User Target.
Nov 24 04:37:15 np0005533252 systemd[133000]: Stopped target Basic System.
Nov 24 04:37:15 np0005533252 systemd[133000]: Stopped target Paths.
Nov 24 04:37:15 np0005533252 systemd[133000]: Stopped target Sockets.
Nov 24 04:37:15 np0005533252 systemd[133000]: Stopped target Timers.
Nov 24 04:37:15 np0005533252 systemd[133000]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 04:37:15 np0005533252 systemd[133000]: Closed D-Bus User Message Bus Socket.
Nov 24 04:37:15 np0005533252 systemd[133000]: Stopped Create User's Volatile Files and Directories.
Nov 24 04:37:15 np0005533252 systemd[133000]: Removed slice User Application Slice.
Nov 24 04:37:15 np0005533252 systemd[133000]: Reached target Shutdown.
Nov 24 04:37:15 np0005533252 systemd[133000]: Finished Exit the Session.
Nov 24 04:37:15 np0005533252 systemd[133000]: Reached target Exit the Session.
Nov 24 04:37:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:15 np0005533252 systemd[1]: user@0.service: Deactivated successfully.
Nov 24 04:37:15 np0005533252 systemd[1]: Stopped User Manager for UID 0.
Nov 24 04:37:15 np0005533252 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 24 04:37:15 np0005533252 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 24 04:37:15 np0005533252 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 24 04:37:15 np0005533252 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 24 04:37:15 np0005533252 systemd[1]: Removed slice User Slice of UID 0.
Nov 24 04:37:15 np0005533252 systemd-logind[823]: New session 51 of user zuul.
Nov 24 04:37:15 np0005533252 systemd[1]: Started Session 51 of User zuul.
Nov 24 04:37:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:15 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:16.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:16 np0005533252 python3.9[133753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:37:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:16.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:17 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:17 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:17 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:18 np0005533252 python3.9[133910]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:18.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:18 np0005533252 python3.9[134062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:18.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:19 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:19 np0005533252 python3.9[134214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:19 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4414009f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:19 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4001a00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:20 np0005533252 python3.9[134367]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:20.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:20 np0005533252 python3.9[134519]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:20.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:21 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:22 np0005533252 python3.9[134694]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:37:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:37:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:22.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:37:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:22.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:23 np0005533252 python3.9[134847]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 04:37:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:23 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:23 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:24 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:24 np0005533252 python3.9[134999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:37:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:24.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:37:25 np0005533252 python3.9[135120]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977044.1042645-219-2988908468674/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:25 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:25 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:26 np0005533252 python3.9[135271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:26 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:26 np0005533252 python3.9[135392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977045.588016-264-87182174839306/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:37:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:37:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:26.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:27 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:27 np0005533252 python3.9[135544]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:37:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:28 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43e4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:28.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:28 np0005533252 python3.9[135629]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:37:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:28.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:29 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:30 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:37:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:37:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:30.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:30.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f00018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:31 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:31 np0005533252 python3.9[135784]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:37:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:32 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:32.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:32 np0005533252 python3.9[135938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:32.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:33 np0005533252 python3.9[136059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977052.159136-375-144306091092847/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:33 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f00025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:33 np0005533252 python3.9[136209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:34 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:34 np0005533252 python3.9[136331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977053.3069584-375-82454977258136/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:34.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:35Z|00025|memory|INFO|17024 kB peak resident set size after 29.9 seconds
Nov 24 04:37:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:37:35Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 24 04:37:35 np0005533252 podman[136356]: 2025-11-24 09:37:35.355419969 +0000 UTC m=+0.087947820 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 04:37:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:35 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:35 np0005533252 python3.9[136509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:36 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f00025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:36 np0005533252 python3.9[136631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977055.5285463-507-241546170037071/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:37:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:36.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:37:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:37 np0005533252 python3.9[136781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:37 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:37 np0005533252 python3.9[136902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977056.6747687-507-274874583311384/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:38 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:38 np0005533252 python3.9[137053]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:37:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f00025d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:39 np0005533252 python3.9[137207]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:39 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:40 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:40 np0005533252 python3.9[137360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:40.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:40 np0005533252 python3.9[137438]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:40.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:41 np0005533252 python3.9[137615]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:41 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:41 np0005533252 python3.9[137693]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:42 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:42.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:42 np0005533252 python3.9[137846]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:42.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:43 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:43 np0005533252 python3.9[137998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:43 np0005533252 python3.9[138076]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:44 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:37:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:44.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:37:44 np0005533252 python3.9[138229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:44.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:45 np0005533252 python3.9[138307]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:37:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:37:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:45 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:46 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:46 np0005533252 python3.9[138460]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:37:46 np0005533252 systemd[1]: Reloading.
Nov 24 04:37:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:37:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:46.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:37:46 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:37:46 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:37:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:46.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:47 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:47 np0005533252 python3.9[138650]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:48 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:48 np0005533252 python3.9[138729]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:48.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:48 np0005533252 python3.9[138881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:37:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:48.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:37:49 np0005533252 python3.9[138959]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:49 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:50 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:50 np0005533252 python3.9[139112]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:37:50 np0005533252 systemd[1]: Reloading.
Nov 24 04:37:50 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:37:50 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:37:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:50.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:50 np0005533252 systemd[1]: Starting Create netns directory...
Nov 24 04:37:50 np0005533252 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 04:37:50 np0005533252 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 04:37:50 np0005533252 systemd[1]: Finished Create netns directory.
Nov 24 04:37:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:50.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:51 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 24 04:37:51 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 24 04:37:51 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 24 04:37:51 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 24 04:37:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f441400a470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:51 np0005533252 radosgw[81417]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 24 04:37:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:51 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:51 np0005533252 python3.9[139307]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:52 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:52 np0005533252 python3.9[139461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:52.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:53 np0005533252 python3.9[139584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977071.9838781-960-232670460355376/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:53 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44040044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:54 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:54 np0005533252 python3.9[139737]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:37:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:54.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:54 np0005533252 python3.9[139889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:37:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:54.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:55 np0005533252 python3.9[140012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977074.4968886-1035-257021208689875/.source.json _original_basename=.8qwzfune follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:55 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:56 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43ec003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:37:56 np0005533252 python3.9[140165]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:37:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:37:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:37:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:56.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[125768]: 24/11/2025 09:37:57 : epoch 6924270f : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f43f0003530 fd 38 proxy ignored for local
Nov 24 04:37:57 np0005533252 kernel: ganesha.nfsd[135631]: segfault at 50 ip 00007f44c079232e sp 00007f447cff8210 error 4 in libntirpc.so.5.8[7f44c0777000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 24 04:37:57 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:37:57 np0005533252 systemd[1]: Started Process Core Dump (PID 140411/UID 0).
Nov 24 04:37:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:37:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:58.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:37:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:37:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:58.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:37:59 np0005533252 systemd-coredump[140413]: Process 125772 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 58:#012#0  0x00007f44c079232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:37:59 np0005533252 systemd[1]: systemd-coredump@3-140411-0.service: Deactivated successfully.
Nov 24 04:37:59 np0005533252 systemd[1]: systemd-coredump@3-140411-0.service: Consumed 1.225s CPU time.
Nov 24 04:37:59 np0005533252 python3.9[140595]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 24 04:37:59 np0005533252 podman[140601]: 2025-11-24 09:37:59.361282502 +0000 UTC m=+0.028779527 container died 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:37:59 np0005533252 systemd[1]: var-lib-containers-storage-overlay-e32851b5a2e93b9365e0cf37abccf059ff991f45f064509199c8a8139824910b-merged.mount: Deactivated successfully.
Nov 24 04:37:59 np0005533252 podman[140601]: 2025-11-24 09:37:59.402326316 +0000 UTC m=+0.069823321 container remove 6cabd88b91f4fd1437b7ff52ddca5cb05345de4c636d0778a8357b125db16eaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 04:37:59 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:37:59 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:37:59 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.620s CPU time.
Nov 24 04:38:00 np0005533252 python3.9[140798]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 04:38:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:38:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:38:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:00.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:38:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:00.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:38:01 np0005533252 python3.9[140950]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 04:38:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:02.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:02.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:03 np0005533252 python3[141155]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 04:38:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093803 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:38:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:04.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:04.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:38:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:06.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:38:06 np0005533252 podman[141218]: 2025-11-24 09:38:06.769667774 +0000 UTC m=+0.498913465 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 04:38:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:07.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:09.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:09 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 4.
Nov 24 04:38:09 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:38:09 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.620s CPU time.
Nov 24 04:38:09 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:38:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:38:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:11.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:38:11 np0005533252 podman[141434]: 2025-11-24 09:38:11.901472511 +0000 UTC m=+0.043313749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:38:12 np0005533252 podman[141434]: 2025-11-24 09:38:12.169217671 +0000 UTC m=+0.311058889 container create 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:38:12 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:38:12 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:38:12 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:38:12 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:38:12 np0005533252 podman[141434]: 2025-11-24 09:38:12.248975071 +0000 UTC m=+0.390816289 container init 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:38:12 np0005533252 podman[141434]: 2025-11-24 09:38:12.25469336 +0000 UTC m=+0.396534578 container start 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 04:38:12 np0005533252 bash[141434]: 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82
Nov 24 04:38:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:38:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:38:12 np0005533252 podman[141168]: 2025-11-24 09:38:12.264658591 +0000 UTC m=+8.879166169 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:38:12 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:38:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:38:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:38:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:38:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:38:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:38:12 np0005533252 podman[141492]: 2025-11-24 09:38:12.41874347 +0000 UTC m=+0.058118717 container create 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 04:38:12 np0005533252 podman[141492]: 2025-11-24 09:38:12.381810216 +0000 UTC m=+0.021185493 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:38:12 np0005533252 python3[141155]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:38:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:12 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:38:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:12.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:38:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:13.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:13 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:38:13 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:38:13 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:38:13 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:38:13 np0005533252 python3.9[141704]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:38:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:14 np0005533252 python3.9[141859]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:15.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:15 np0005533252 python3.9[141935]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:38:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:38:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:38:15 np0005533252 python3.9[142086]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977095.215669-1299-3560948753067/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:16 np0005533252 python3.9[142163]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 04:38:16 np0005533252 systemd[1]: Reloading.
Nov 24 04:38:16 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:38:16 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:38:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:17.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:38:17 np0005533252 python3.9[142274]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:38:17 np0005533252 systemd[1]: Reloading.
Nov 24 04:38:17 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:38:17 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:38:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:18 np0005533252 systemd[1]: Starting ovn_metadata_agent container...
Nov 24 04:38:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:38:18 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:38:18 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f713fb5a8a62c677df2ae45d949700fe56a36660790c930a7d72c59f7bc7c3/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 24 04:38:18 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f713fb5a8a62c677df2ae45d949700fe56a36660790c930a7d72c59f7bc7c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 04:38:18 np0005533252 systemd[1]: Started /usr/bin/podman healthcheck run 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b.
Nov 24 04:38:18 np0005533252 podman[142316]: 2025-11-24 09:38:18.160813825 +0000 UTC m=+0.109041830 container init 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + sudo -E kolla_set_configs
Nov 24 04:38:18 np0005533252 podman[142316]: 2025-11-24 09:38:18.182501731 +0000 UTC m=+0.130729716 container start 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 04:38:18 np0005533252 edpm-start-podman-container[142316]: ovn_metadata_agent
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Validating config file
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Copying service configuration files
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Writing out command to execute
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: ++ cat /run_command
Nov 24 04:38:18 np0005533252 edpm-start-podman-container[142315]: Creating additional drop-in dependency for "ovn_metadata_agent" (6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b)
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + CMD=neutron-ovn-metadata-agent
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + ARGS=
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + sudo kolla_copy_cacerts
Nov 24 04:38:18 np0005533252 podman[142338]: 2025-11-24 09:38:18.246149641 +0000 UTC m=+0.054068929 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + [[ ! -n '' ]]
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + . kolla_extend_start
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: Running command: 'neutron-ovn-metadata-agent'
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + umask 0022
Nov 24 04:38:18 np0005533252 ovn_metadata_agent[142331]: + exec neutron-ovn-metadata-agent
Nov 24 04:38:18 np0005533252 systemd[1]: Reloading.
Nov 24 04:38:18 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:38:18 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:38:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:18 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:38:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:18 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:38:18 np0005533252 systemd[1]: Started ovn_metadata_agent container.
Nov 24 04:38:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:19.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:19 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:38:19 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:38:19 np0005533252 systemd[1]: session-51.scope: Deactivated successfully.
Nov 24 04:38:19 np0005533252 systemd[1]: session-51.scope: Consumed 55.077s CPU time.
Nov 24 04:38:19 np0005533252 systemd-logind[823]: Session 51 logged out. Waiting for processes to exit.
Nov 24 04:38:19 np0005533252 systemd-logind[823]: Removed session 51.
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.002 142336 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.003 142336 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.004 142336 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.005 142336 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.006 142336 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.007 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.008 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.009 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.010 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.011 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.012 142336 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.013 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.014 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.015 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.016 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.017 142336 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.018 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.019 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.020 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.021 142336 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.022 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.023 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.024 142336 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.025 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.026 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.027 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.028 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.029 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.030 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.031 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.032 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.033 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.034 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.035 142336 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.043 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.043 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.043 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.044 142336 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.044 142336 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.056 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 803b139a-7fca-4549-8597-645cf677225d (UUID: 803b139a-7fca-4549-8597-645cf677225d) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.077 142336 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.077 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.077 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.077 142336 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.080 142336 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.087 142336 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.094 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '803b139a-7fca-4549-8597-645cf677225d'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], external_ids={}, name=803b139a-7fca-4549-8597-645cf677225d, nb_cfg_timestamp=1763977033475, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.095 142336 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f5c78675f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.096 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.096 142336 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.096 142336 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.096 142336 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.100 142336 DEBUG oslo_service.service [-] Started child 142471 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.103 142471 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-955359'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.103 142336 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp0o2q9_m5/privsep.sock']#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.123 142471 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.123 142471 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.123 142471 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.126 142471 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.132 142471 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.139 142471 INFO eventlet.wsgi.server [-] (142471) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 24 04:38:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:20.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:20 np0005533252 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.758 142336 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.758 142336 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0o2q9_m5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.640 142476 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.644 142476 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.648 142476 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.649 142476 INFO oslo.privsep.daemon [-] privsep daemon running as pid 142476#033[00m
Nov 24 04:38:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:20.761 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[90395d3a-65b0-4b47-a7a7-554ba30cd0e6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:38:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:21.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.257 142476 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.257 142476 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.257 142476 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.789 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a2d89e-80db-49e4-858c-5cdcc567210a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.791 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, column=external_ids, values=({'neutron:ovn-metadata-id': 'f0c01ca3-a3f1-5efc-8a96-3a1db00f23b0'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.813 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.819 142336 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.820 142336 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.821 142336 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.822 142336 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.823 142336 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.824 142336 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.825 142336 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.826 142336 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.827 142336 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.828 142336 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.829 142336 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.830 142336 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.831 142336 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.832 142336 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.833 142336 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.834 142336 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.835 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.836 142336 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.837 142336 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.838 142336 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.839 142336 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.840 142336 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.841 142336 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.842 142336 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.843 142336 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.844 142336 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.845 142336 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.846 142336 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.847 142336 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.848 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.849 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.850 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.851 142336 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.852 142336 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.852 142336 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.852 142336 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:38:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:38:21.852 142336 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 24 04:38:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:23.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093824 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:38:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:24 : epoch 69242784 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:38:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:25.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:25 np0005533252 systemd-logind[823]: New session 52 of user zuul.
Nov 24 04:38:25 np0005533252 systemd[1]: Started Session 52 of User zuul.
Nov 24 04:38:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:25 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae44000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:25 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae38001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:26 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:26 np0005533252 python3.9[142678]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:38:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:26.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:27.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:27 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093827 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:38:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:27 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:27 np0005533252 python3.9[142834]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:38:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:28 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:28.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:29.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:29 np0005533252 python3.9[143000]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 04:38:29 np0005533252 systemd[1]: Reloading.
Nov 24 04:38:29 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:38:29 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:38:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:29 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:29 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:30 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:30 np0005533252 python3.9[143186]: ansible-ansible.builtin.service_facts Invoked
Nov 24 04:38:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:38:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:38:30 np0005533252 network[143203]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:38:30 np0005533252 network[143204]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:38:30 np0005533252 network[143205]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:38:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:30.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:31.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:31 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:31 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:32 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:32.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:32 : epoch 69242784 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:38:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:33.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:33 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:33 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:34 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:34.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:35.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:35 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:35 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:35 : epoch 69242784 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:38:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:35 : epoch 69242784 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:38:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:36 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380026e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:36 np0005533252 python3.9[143470]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:38:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:36.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:36 np0005533252 python3.9[143623]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:38:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:37.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:37 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380026e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:37 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:37 np0005533252 python3.9[143776]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:38:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:38 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:38 np0005533252 python3.9[143930]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:38:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:38.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:38 : epoch 69242784 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:38:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:39 np0005533252 python3.9[144083]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:38:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:39 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:39 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380033f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:39 np0005533252 python3.9[144236]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:38:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:40 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:40 np0005533252 podman[144362]: 2025-11-24 09:38:40.457115701 +0000 UTC m=+0.101801367 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:38:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:40.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:40 np0005533252 python3.9[144404]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:38:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:41 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:41 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:42 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380033f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:42.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:42 np0005533252 python3.9[144592]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:43 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:43 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:44 np0005533252 python3.9[144744]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:44 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093844 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:38:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:44.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:44 np0005533252 python3.9[144898]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:45 np0005533252 python3.9[145050]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:38:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:38:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:45 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae30003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:45 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:46 np0005533252 python3.9[145202]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:46 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:46.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:46 np0005533252 python3.9[145355]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:47.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:47 np0005533252 python3.9[145507]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:47 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380033f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:47 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae300038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:48 np0005533252 python3.9[145660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:48 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:48 np0005533252 podman[145784]: 2025-11-24 09:38:48.535259986 +0000 UTC m=+0.069481849 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:38:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:48.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:48 np0005533252 python3.9[145831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:49 np0005533252 python3.9[145983]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:49 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:49 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380033f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:50 np0005533252 python3.9[146135]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:50 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae300038f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:38:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:50.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:38:50 np0005533252 python3.9[146288]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:51 np0005533252 python3.9[146440]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:51 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:51 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae20003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:38:51 np0005533252 python3.9[146592]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:38:52 np0005533252 kernel: ganesha.nfsd[142511]: segfault at 50 ip 00007faef302a32e sp 00007faec9ffa210 error 4 in libntirpc.so.5.8[7faef300f000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 24 04:38:52 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:38:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[141451]: 24/11/2025 09:38:52 : epoch 69242784 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae380044f0 fd 38 proxy ignored for local
Nov 24 04:38:52 np0005533252 systemd[1]: Started Process Core Dump (PID 146618/UID 0).
Nov 24 04:38:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:53 np0005533252 systemd-coredump[146619]: Process 141455 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007faef302a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:38:53 np0005533252 systemd[1]: systemd-coredump@4-146618-0.service: Deactivated successfully.
Nov 24 04:38:53 np0005533252 systemd[1]: systemd-coredump@4-146618-0.service: Consumed 1.247s CPU time.
Nov 24 04:38:53 np0005533252 python3.9[146747]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:38:53 np0005533252 podman[146755]: 2025-11-24 09:38:53.569910948 +0000 UTC m=+0.029564683 container died 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:38:53 np0005533252 systemd[1]: var-lib-containers-storage-overlay-6cc9e1a38f1f877fde7085e5844e3e0c94cc5d0c869a65827de724ee9e26bcda-merged.mount: Deactivated successfully.
Nov 24 04:38:53 np0005533252 podman[146755]: 2025-11-24 09:38:53.621657092 +0000 UTC m=+0.081310827 container remove 679112fde20091df69e7ec390984a19f2940b3f9ab05818fbcda8617354fdc82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 04:38:53 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:38:53 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:38:53 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.581s CPU time.
Nov 24 04:38:54 np0005533252 python3.9[146948]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 04:38:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:55 np0005533252 python3.9[147100]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 04:38:55 np0005533252 systemd[1]: Reloading.
Nov 24 04:38:55 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:38:55 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:38:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:56.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:56 np0005533252 python3.9[147287]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:38:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:57.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:57 np0005533252 python3.9[147440]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:38:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093857 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:38:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:38:58 np0005533252 python3.9[147593]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:38:58 np0005533252 python3.9[147747]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:38:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:38:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:58.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:38:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:38:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:38:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:38:59 np0005533252 python3.9[147900]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:38:59 np0005533252 python3.9[148053]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:39:00 np0005533252 python3.9[148207]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:39:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:39:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:39:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:00.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:02.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:03 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 5.
Nov 24 04:39:03 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:39:03 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.581s CPU time.
Nov 24 04:39:03 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:39:04 np0005533252 podman[148380]: 2025-11-24 09:39:04.208472445 +0000 UTC m=+0.049717725 container create 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:39:04 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:39:04 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:39:04 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:39:04 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:39:04 np0005533252 podman[148380]: 2025-11-24 09:39:04.270951541 +0000 UTC m=+0.112196821 container init 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 04:39:04 np0005533252 podman[148380]: 2025-11-24 09:39:04.277450909 +0000 UTC m=+0.118696179 container start 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:39:04 np0005533252 podman[148380]: 2025-11-24 09:39:04.183888165 +0000 UTC m=+0.025133485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:39:04 np0005533252 bash[148380]: 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3
Nov 24 04:39:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:39:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:39:04 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:39:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:39:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:39:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:39:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:39:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:39:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:39:04 np0005533252 python3.9[148466]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 24 04:39:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:04.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:05 np0005533252 python3.9[148641]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 04:39:06 np0005533252 python3.9[148800]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 04:39:06 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:39:06 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:39:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:07.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:07 np0005533252 python3.9[148961]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:39:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:08 np0005533252 python3.9[149046]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:39:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:08.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:09.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:10 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:39:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:10 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:39:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:39:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:10.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:39:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:11 np0005533252 podman[149057]: 2025-11-24 09:39:11.329314169 +0000 UTC m=+0.071792464 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 04:39:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:12.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:14.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:39:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:39:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:39:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:39:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:39:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9104000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:18 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:18.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:19 np0005533252 podman[149300]: 2025-11-24 09:39:19.313365325 +0000 UTC m=+0.052122213 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 04:39:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093919 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:39:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:39:20.037 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:39:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:39:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:39:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:39:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:39:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:20 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:39:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:20.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:39:20 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:39:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:22 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093922 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:39:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093923 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:39:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:24 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:24.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:39:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:39:24 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:39:24 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:39:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f00021c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:26 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:26.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f00021c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:28 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:39:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:28.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:39:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:30 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:39:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:39:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:39:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:39:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:31 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:31 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:32 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:39:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:39:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:33 np0005533252 kernel: SELinux:  Converting 2772 SID table entries...
Nov 24 04:39:33 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 04:39:33 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 04:39:33 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 04:39:33 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 04:39:33 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 04:39:33 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 04:39:33 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 04:39:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:33.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:33 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:33 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:34 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:34 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:39:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:35 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:35 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:36 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:36.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:39:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:39:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:38 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:38.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:39:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:40 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:40.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:41 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:41 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 24 04:39:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:41 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:41 np0005533252 podman[149473]: 2025-11-24 09:39:41.74250881 +0000 UTC m=+0.103712026 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 04:39:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:42 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:42.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:42 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:39:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:43 np0005533252 kernel: SELinux:  Converting 2772 SID table entries...
Nov 24 04:39:43 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 04:39:43 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 04:39:43 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 04:39:43 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 04:39:43 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 04:39:43 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 04:39:43 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 04:39:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:43 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:43 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:44 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:44.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:45.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:39:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:39:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:45 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:45 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:46 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:39:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:39:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:47 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:47 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:48 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093948 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:39:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:48.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/093949 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:39:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:49.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:49 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:49 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:50 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:50 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 24 04:39:50 np0005533252 podman[149513]: 2025-11-24 09:39:50.343948757 +0000 UTC m=+0.065752629 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 04:39:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:51 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:51 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:52 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:39:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:52.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:39:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:53.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:53 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:53 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:54 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:39:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:54.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:39:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:55.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:55 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:55 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:56 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:56.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:57.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:57 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:57 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:58 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:39:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:39:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:58.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:39:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:39:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:39:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:59.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:39:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:59 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:39:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:39:59 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:00 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:40:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:40:00 np0005533252 ceph-mon[80009]: overall HEALTH_OK
Nov 24 04:40:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:40:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:00.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:40:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:40:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:01.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:40:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:01 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:01 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:02 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:40:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:02.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:40:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:40:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:03.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:40:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:05 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:05 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:06 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:06.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:07.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:07 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:07 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:08 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:09.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:09 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:09 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:10 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:10.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:11.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:11 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:11 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:12 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:12 np0005533252 podman[160190]: 2025-11-24 09:40:12.358933253 +0000 UTC m=+0.092535794 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 04:40:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:12.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:13.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:13 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:13 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:14 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0001bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:14.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:15.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:40:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:40:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:15 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:15 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:16.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:17.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0001bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:18 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:18.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:40:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:19.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:40:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:40:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:40:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:40:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:40:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:40:20.038 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:40:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:20 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:40:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:40:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:21.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:21 np0005533252 podman[165957]: 2025-11-24 09:40:21.301782425 +0000 UTC m=+0.047114749 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 04:40:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:22 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:23.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0002ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:24 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:24.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:25.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:25 np0005533252 podman[166549]: 2025-11-24 09:40:25.791687537 +0000 UTC m=+0.083138017 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:40:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 04:40:25 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:25 np0005533252 podman[166549]: 2025-11-24 09:40:25.92189452 +0000 UTC m=+0.213344990 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:40:26 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:26 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:26 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:26 np0005533252 podman[166688]: 2025-11-24 09:40:26.376653494 +0000 UTC m=+0.048568305 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:40:26 np0005533252 podman[166688]: 2025-11-24 09:40:26.412768074 +0000 UTC m=+0.084682885 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:40:26 np0005533252 podman[166763]: 2025-11-24 09:40:26.658265374 +0000 UTC m=+0.059258345 container exec 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 04:40:26 np0005533252 podman[166763]: 2025-11-24 09:40:26.670708128 +0000 UTC m=+0.071701089 container exec_died 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:40:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:26.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:26 np0005533252 podman[166829]: 2025-11-24 09:40:26.950013049 +0000 UTC m=+0.145304855 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:40:27 np0005533252 podman[166851]: 2025-11-24 09:40:27.02662491 +0000 UTC m=+0.055668214 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:40:27 np0005533252 podman[166829]: 2025-11-24 09:40:27.125358119 +0000 UTC m=+0.320649905 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:40:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:27.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:27 np0005533252 podman[166898]: 2025-11-24 09:40:27.313921563 +0000 UTC m=+0.050408742 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=)
Nov 24 04:40:27 np0005533252 podman[166898]: 2025-11-24 09:40:27.326775197 +0000 UTC m=+0.063262366 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, vendor=Red Hat, Inc., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.28.2, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, release=1793, name=keepalived, vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 24 04:40:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:40:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:40:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 04:40:28 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:28 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:40:28 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:40:28 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:40:28 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:28 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:29.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:40:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:40:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:30 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:40:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:40:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:30 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:30 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:40:30 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:40:30 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:40:30 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:40:30 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:40:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:31.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:31 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:31 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:32 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:40:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:32.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:40:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:40:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:33.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:40:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:33 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:33 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:34 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:40:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:40:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:34.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:35 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:40:35 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:40:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:35 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:35 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:36 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:40:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:37.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:40:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:37 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:38 np0005533252 kernel: SELinux:  Converting 2773 SID table entries...
Nov 24 04:40:38 np0005533252 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 04:40:38 np0005533252 kernel: SELinux:  policy capability open_perms=1
Nov 24 04:40:38 np0005533252 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 04:40:38 np0005533252 kernel: SELinux:  policy capability always_check_network=0
Nov 24 04:40:38 np0005533252 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 04:40:38 np0005533252 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 04:40:38 np0005533252 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 04:40:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:38 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc001f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:38.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:39 np0005533252 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 04:40:39 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 24 04:40:39 np0005533252 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Nov 24 04:40:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:39.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0003940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:39 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:40 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:40.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:41.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:41 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:41 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:42 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:42 np0005533252 podman[167149]: 2025-11-24 09:40:42.84565527 +0000 UTC m=+0.213938221 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 04:40:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 04:40:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:43.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 04:40:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:43 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:43 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:44 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:45.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:40:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:40:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:45 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:45 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:46 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:46.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:46 np0005533252 systemd[1]: Stopping OpenSSH server daemon...
Nov 24 04:40:46 np0005533252 systemd[1]: sshd.service: Deactivated successfully.
Nov 24 04:40:46 np0005533252 systemd[1]: Stopped OpenSSH server daemon.
Nov 24 04:40:46 np0005533252 systemd[1]: sshd.service: Consumed 2.103s CPU time, read 32.0K from disk, written 0B to disk.
Nov 24 04:40:46 np0005533252 systemd[1]: Stopped target sshd-keygen.target.
Nov 24 04:40:46 np0005533252 systemd[1]: Stopping sshd-keygen.target...
Nov 24 04:40:46 np0005533252 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 04:40:46 np0005533252 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 04:40:46 np0005533252 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 04:40:46 np0005533252 systemd[1]: Reached target sshd-keygen.target.
Nov 24 04:40:46 np0005533252 systemd[1]: Starting OpenSSH server daemon...
Nov 24 04:40:46 np0005533252 systemd[1]: Started OpenSSH server daemon.
Nov 24 04:40:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:40:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:47.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:40:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:47 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:47 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:48 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:48 np0005533252 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 04:40:48 np0005533252 systemd[1]: Starting man-db-cache-update.service...
Nov 24 04:40:48 np0005533252 systemd[1]: Reloading.
Nov 24 04:40:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:48 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:40:48 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:40:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:49 np0005533252 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 04:40:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:49.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:49 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:49 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:50 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094051 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:40:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:51.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:51 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:51 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:52 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:52 np0005533252 podman[172376]: 2025-11-24 09:40:52.318734814 +0000 UTC m=+0.056734971 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 04:40:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:53.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:53 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:53 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:54 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e40014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:40:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:40:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:55.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:55 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:55 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:56 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:57 np0005533252 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 04:40:57 np0005533252 systemd[1]: Finished man-db-cache-update.service.
Nov 24 04:40:57 np0005533252 systemd[1]: man-db-cache-update.service: Consumed 10.367s CPU time.
Nov 24 04:40:57 np0005533252 systemd[1]: run-r73e53c4d84894e42b340f1be3eb04c6a.service: Deactivated successfully.
Nov 24 04:40:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:40:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:57.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:40:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:57 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e40014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:57 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:58 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:40:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:58.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:40:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:40:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:40:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:40:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:59.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:40:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:59 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:40:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:40:59 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e40014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:00 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:41:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:00 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:41:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:41:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:01 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:01 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:02 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4001670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:41:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:41:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:03.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:03 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:04 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:05.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:05 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:05 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:06 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:41:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:06 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:06.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:07.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:07 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:07 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4003710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:08 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:08.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:09.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:09 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:09 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:10 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:10.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094111 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:41:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:11.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:11 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:11 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:12 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:12.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:13.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:13 np0005533252 podman[176711]: 2025-11-24 09:41:13.360180099 +0000 UTC m=+0.101554041 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 24 04:41:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:13 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:13 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:14 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:14.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:15.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:41:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:41:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:15 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:15 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e4004030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:16 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:16.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:17.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:17 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:18 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:18.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:19.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:19 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:41:20.039 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:41:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:41:20.039 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:41:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:41:20.039 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:41:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:20 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:20.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:21.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:21 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:22 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:22.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:23 np0005533252 podman[176768]: 2025-11-24 09:41:23.306514124 +0000 UTC m=+0.049372111 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 24 04:41:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:23.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:23 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90fc002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:24 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:24.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:25.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:25 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:26 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:26.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:27 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90e00043b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:28 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90f0004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:28.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[148418]: 24/11/2025 09:41:29 : epoch 692427b8 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90d8003da0 fd 38 proxy ignored for local
Nov 24 04:41:29 np0005533252 kernel: ganesha.nfsd[149236]: segfault at 50 ip 00007f91affde32e sp 00007f916cff8210 error 4 in libntirpc.so.5.8[7f91affc3000+2c000] likely on CPU 6 (core 0, socket 6)
Nov 24 04:41:29 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:41:29 np0005533252 systemd[1]: Started Process Core Dump (PID 176791/UID 0).
Nov 24 04:41:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:41:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:41:30 np0005533252 systemd-coredump[176792]: Process 148437 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 53:#012#0  0x00007f91affde32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:41:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:30.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:30 np0005533252 systemd[1]: systemd-coredump@5-176791-0.service: Deactivated successfully.
Nov 24 04:41:30 np0005533252 systemd[1]: systemd-coredump@5-176791-0.service: Consumed 1.151s CPU time.
Nov 24 04:41:30 np0005533252 podman[176850]: 2025-11-24 09:41:30.985813757 +0000 UTC m=+0.024365173 container died 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:41:31 np0005533252 systemd[1]: var-lib-containers-storage-overlay-ca848b0992b5514535531e6a96c6a662d50b4b69145fd2e2f67a3d9272fea15d-merged.mount: Deactivated successfully.
Nov 24 04:41:31 np0005533252 podman[176850]: 2025-11-24 09:41:31.029864728 +0000 UTC m=+0.068416124 container remove 72f08d8220aebf8a177859741b959495fc0d990644c83dbaa6c96d6a6ae331e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Nov 24 04:41:31 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:41:31 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:41:31 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.622s CPU time.
Nov 24 04:41:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:31 np0005533252 python3.9[176968]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:41:31 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:31 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:31 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:32 np0005533252 python3.9[177159]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:41:32 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:32 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:32 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:33 np0005533252 python3.9[177348]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:41:33 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:33 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:33 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:34 np0005533252 python3.9[177539]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:41:34 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:34 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:34 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:34.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:35.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094135 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:41:35 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:41:36 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:41:36 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:41:36 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:41:36 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:41:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:37.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:38 np0005533252 python3.9[177813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:38 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094138 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:41:38 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:38 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:38.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:39.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:39 np0005533252 python3.9[178003]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:39 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:39 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:39 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:40 np0005533252 python3.9[178193]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:41:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:41:40 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:40 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:40 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:41 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 6.
Nov 24 04:41:41 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:41:41 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.622s CPU time.
Nov 24 04:41:41 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:41:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:41 np0005533252 podman[178431]: 2025-11-24 09:41:41.447186094 +0000 UTC m=+0.039058379 container create e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 04:41:41 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:41:41 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:41:41 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:41:41 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:41:41 np0005533252 podman[178431]: 2025-11-24 09:41:41.51761534 +0000 UTC m=+0.109487645 container init e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 04:41:41 np0005533252 podman[178431]: 2025-11-24 09:41:41.522676925 +0000 UTC m=+0.114549210 container start e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:41:41 np0005533252 podman[178431]: 2025-11-24 09:41:41.428867135 +0000 UTC m=+0.020739430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:41:41 np0005533252 bash[178431]: e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e
Nov 24 04:41:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:41:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:41:41 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:41:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:41:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:41:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:41:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:41:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:41:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:41 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:41:41 np0005533252 python3.9[178463]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:41 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:41:41 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:41:42 np0005533252 python3.9[178686]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:42 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:42 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:42 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:42.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:43.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:44 np0005533252 podman[178804]: 2025-11-24 09:41:44.355196715 +0000 UTC m=+0.085173239 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 24 04:41:44 np0005533252 python3.9[178905]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 04:41:44 np0005533252 systemd[1]: Reloading.
Nov 24 04:41:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:44.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:44 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:41:44 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:41:45 np0005533252 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 24 04:41:45 np0005533252 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 24 04:41:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:45.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:41:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:41:46 np0005533252 python3.9[179098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:46.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:46 np0005533252 python3.9[179254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:47.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:47 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:41:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:47 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:41:47 np0005533252 python3.9[179409]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:48 np0005533252 python3.9[179565]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:49.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:49 np0005533252 python3.9[179720]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:50 np0005533252 python3.9[179875]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:50.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:50 np0005533252 python3.9[180031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:51.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:51 np0005533252 python3.9[180186]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:52 np0005533252 python3.9[180342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:53 np0005533252 python3.9[180497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:53.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:41:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:53 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:53 np0005533252 podman[180636]: 2025-11-24 09:41:53.814684659 +0000 UTC m=+0.098476515 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 04:41:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:54 np0005533252 python3.9[180683]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:54 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efff8001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:54 np0005533252 python3.9[180843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:55 np0005533252 python3.9[180998]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:55 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094155 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:41:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:55 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:56 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:56 np0005533252 python3.9[181154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 04:41:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:56 : epoch 69242855 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:41:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:56 : epoch 69242855 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:41:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:57.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:57 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:57 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:58 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe0001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:41:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:41:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:41:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:59 : epoch 69242855 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:41:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:41:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:41:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:59.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:41:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:59 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:41:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:41:59 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:00 np0005533252 python3.9[181311]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:42:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:00 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:42:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:42:00 np0005533252 python3.9[181463]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:42:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:01 np0005533252 python3.9[181615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:42:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:01 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe0001c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:01 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:02 np0005533252 python3.9[181768]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:42:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:02 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efff8002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094202 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:42:02 np0005533252 python3.9[181945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:42:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:03 np0005533252 python3.9[182097]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:42:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:03.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:03 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:03 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe0001c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:04 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009990 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:04 np0005533252 python3.9[182250]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:04.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:05.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:05 np0005533252 python3.9[182375]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977324.2128937-1623-48127278510338/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:05 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efff8002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:05 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:06 np0005533252 python3.9[182528]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:06 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:06 np0005533252 python3.9[182653]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977325.6198177-1623-81990931702662/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:06.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:07 np0005533252 python3.9[182805]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:07.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:07 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:07 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe0001c60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:07 np0005533252 python3.9[182930]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977326.7134392-1623-157128733567949/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:08 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:08 np0005533252 python3.9[183083]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:08.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:09 np0005533252 python3.9[183208]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977327.9232705-1623-82231872348132/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:09.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:09 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:09 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:09 np0005533252 python3.9[183360]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:10 np0005533252 python3.9[183486]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977329.353372-1623-206547304598474/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:10 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:11 np0005533252 python3.9[183638]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:11.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:11 np0005533252 python3.9[183763]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977330.4801853-1623-236033184449756/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:11 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:11 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:12 np0005533252 python3.9[183916]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:12 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:12 np0005533252 python3.9[184039]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977331.6875618-1623-179535270591674/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:42:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:12.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:42:13 np0005533252 python3.9[184191]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:13.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:13 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:13 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:13 np0005533252 python3.9[184316]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977332.8295412-1623-269588633086473/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:14 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:14.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:15 np0005533252 podman[184342]: 2025-11-24 09:42:15.340247559 +0000 UTC m=+0.074481597 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:42:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:42:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:42:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:15.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:15 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:15 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:16 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:16 np0005533252 python3.9[184496]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 24 04:42:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:16.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:17.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:17 np0005533252 python3.9[184649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:17 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:17 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd4002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:18 np0005533252 python3.9[184802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:18 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:18 np0005533252 python3.9[184954]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:42:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:19.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:42:19 np0005533252 python3.9[185106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:19 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:19 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0004009c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:42:20.041 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:42:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:42:20.041 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:42:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:42:20.041 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:42:20 np0005533252 python3.9[185259]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:20 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd4002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:20 np0005533252 python3.9[185411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:20.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:21 np0005533252 python3.9[185563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:21.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:21 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe00041f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[178466]: 24/11/2025 09:42:21 : epoch 69242855 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003c10 fd 37 proxy ignored for local
Nov 24 04:42:21 np0005533252 kernel: ganesha.nfsd[180639]: segfault at 50 ip 00007f00ad40532e sp 00007f00657f9210 error 4 in libntirpc.so.5.8[7f00ad3ea000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 04:42:21 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:42:21 np0005533252 systemd[1]: Started Process Core Dump (PID 185715/UID 0).
Nov 24 04:42:21 np0005533252 python3.9[185716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:22 np0005533252 python3.9[185893]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:22 np0005533252 systemd-coredump[185717]: Process 178470 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 54:#012#0  0x00007f00ad40532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:42:22 np0005533252 systemd[1]: systemd-coredump@6-185715-0.service: Deactivated successfully.
Nov 24 04:42:22 np0005533252 systemd[1]: systemd-coredump@6-185715-0.service: Consumed 1.032s CPU time.
Nov 24 04:42:22 np0005533252 podman[186029]: 2025-11-24 09:42:22.888196321 +0000 UTC m=+0.022905223 container died e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:42:22 np0005533252 systemd[1]: var-lib-containers-storage-overlay-d26831df6c5a0f65cfaba17b14bc54bef5a769ed56fa20fd9c93db15c5a4a386-merged.mount: Deactivated successfully.
Nov 24 04:42:22 np0005533252 podman[186029]: 2025-11-24 09:42:22.920008881 +0000 UTC m=+0.054717763 container remove e3c4baedaffcfce23f2147ef6f93604f265d93ee34d2b6a77a0ede860308372e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:42:22 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:42:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:23 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:42:23 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.180s CPU time.
Nov 24 04:42:23 np0005533252 python3.9[186067]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:23.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:23 np0005533252 python3.9[186246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:24 np0005533252 podman[186371]: 2025-11-24 09:42:24.156376901 +0000 UTC m=+0.072952299 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 04:42:24 np0005533252 python3.9[186416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:24 np0005533252 python3.9[186570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:25 np0005533252 python3.9[186722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:26.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094227 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:42:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094227 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:42:28 np0005533252 python3.9[186875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:28 np0005533252 python3.9[186999]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977347.458738-2286-145390563149549/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:28.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:29 np0005533252 python3.9[187151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:29 np0005533252 python3.9[187274]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977348.724612-2286-35281415839000/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:30 np0005533252 python3.9[187427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:42:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:42:30 np0005533252 python3.9[187550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977350.0146854-2286-133346692089887/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:30.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:31 np0005533252 python3.9[187702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:31.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:31 np0005533252 python3.9[187825]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977351.0238206-2286-263474250825543/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:32 np0005533252 python3.9[187978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:32.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:33 np0005533252 python3.9[188101]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977352.1547716-2286-64856941647534/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:33 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 7.
Nov 24 04:42:33 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:42:33 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.180s CPU time.
Nov 24 04:42:33 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:42:33 np0005533252 podman[188272]: 2025-11-24 09:42:33.462694649 +0000 UTC m=+0.041140369 container create 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:42:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:33.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:33 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:42:33 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:42:33 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:42:33 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:42:33 np0005533252 podman[188272]: 2025-11-24 09:42:33.519964693 +0000 UTC m=+0.098410423 container init 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:42:33 np0005533252 podman[188272]: 2025-11-24 09:42:33.527054807 +0000 UTC m=+0.105500517 container start 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:42:33 np0005533252 bash[188272]: 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494
Nov 24 04:42:33 np0005533252 podman[188272]: 2025-11-24 09:42:33.445705942 +0000 UTC m=+0.024151672 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:42:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:42:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:42:33 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:42:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:42:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:42:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:42:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:42:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:42:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:42:33 np0005533252 python3.9[188313]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:34 np0005533252 python3.9[188481]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977353.2072027-2286-152026052386695/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:34 np0005533252 python3.9[188633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:42:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.128868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355128912, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4660, "num_deletes": 502, "total_data_size": 12833106, "memory_usage": 12990160, "flush_reason": "Manual Compaction"}
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355202860, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8337447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13340, "largest_seqno": 17994, "table_properties": {"data_size": 8319676, "index_size": 12025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36611, "raw_average_key_size": 19, "raw_value_size": 8283149, "raw_average_value_size": 4458, "num_data_blocks": 525, "num_entries": 1858, "num_filter_entries": 1858, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976901, "oldest_key_time": 1763976901, "file_creation_time": 1763977355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 74164 microseconds, and 14811 cpu microseconds.
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.203030) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8337447 bytes OK
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.203087) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.219838) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.219881) EVENT_LOG_v1 {"time_micros": 1763977355219873, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.219903) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12812584, prev total WAL file size 12812584, number of live WAL files 2.
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.222973) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8142KB)], [27(12MB)]
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355223218, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21278409, "oldest_snapshot_seqno": -1}
Nov 24 04:42:35 np0005533252 python3.9[188756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977354.3437433-2286-53731916561259/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5098 keys, 15470082 bytes, temperature: kUnknown
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355432132, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15470082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15431230, "index_size": 24982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 127564, "raw_average_key_size": 25, "raw_value_size": 15334178, "raw_average_value_size": 3007, "num_data_blocks": 1048, "num_entries": 5098, "num_filter_entries": 5098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.432641) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15470082 bytes
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.468140) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.7 rd, 73.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.3 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6120, records dropped: 1022 output_compression: NoCompression
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.468200) EVENT_LOG_v1 {"time_micros": 1763977355468175, "job": 14, "event": "compaction_finished", "compaction_time_micros": 209212, "compaction_time_cpu_micros": 33785, "output_level": 6, "num_output_files": 1, "total_output_size": 15470082, "num_input_records": 6120, "num_output_records": 5098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355470268, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 24 04:42:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:35.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355473315, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.222843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:42:35 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:42:35.473436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:42:36 np0005533252 python3.9[188908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:36 np0005533252 python3.9[189032]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977355.54179-2286-133074154356287/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:37 np0005533252 python3.9[189184]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:37.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:37 np0005533252 python3.9[189307]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977356.7342687-2286-247192707197602/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:38 np0005533252 python3.9[189460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094238 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:42:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [NOTICE] 327/094238 (4) : haproxy version is 2.3.17-d1c9119
Nov 24 04:42:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [NOTICE] 327/094238 (4) : path to executable is /usr/local/sbin/haproxy
Nov 24 04:42:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [ALERT] 327/094238 (4) : backend 'backend' has no server available!
Nov 24 04:42:38 np0005533252 python3.9[189583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977357.904956-2286-180587987758880/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:38.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:39 np0005533252 python3.9[189735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:42:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:39.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:42:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:42:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:42:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 04:42:39 np0005533252 python3.9[189858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977358.9922838-2286-238558637813457/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:40 np0005533252 python3.9[190011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:40.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:41 np0005533252 python3.9[190134]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977360.1352117-2286-96924755206331/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:41 np0005533252 python3.9[190367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:42 np0005533252 python3.9[190491]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977361.483262-2286-236081785429874/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:42.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:43 np0005533252 python3.9[190668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:42:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:43 np0005533252 python3.9[190791]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977362.6367795-2286-150128124622534/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:42:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:42:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:42:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 04:42:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:42:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:42:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:42:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:42:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:44.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:42:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:42:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:42:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:42:45 np0005533252 podman[190916]: 2025-11-24 09:42:45.858922859 +0000 UTC m=+0.077167113 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 04:42:45 np0005533252 python3.9[190955]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:42:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:42:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:42:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:42:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:42:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:46.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:46 np0005533252 python3.9[191124]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 24 04:42:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:47.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:42:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:42:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:48 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:49 np0005533252 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 24 04:42:49 np0005533252 python3.9[191296]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:49 np0005533252 python3.9[191448]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:50 np0005533252 python3.9[191601]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:42:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:42:50 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:42:50 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:42:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:42:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:42:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:50.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:51 np0005533252 python3.9[191778]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:51.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:51 np0005533252 python3.9[191930]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094251 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:42:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:52 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:42:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:52.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:42:53 np0005533252 python3.9[192083]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094253 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:42:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:53.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:53 np0005533252 python3.9[192235]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:42:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:54 np0005533252 python3.9[192388]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:54 np0005533252 podman[192389]: 2025-11-24 09:42:54.356209815 +0000 UTC m=+0.079709386 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:42:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:54 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:54 np0005533252 python3.9[192559]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:55.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:55 np0005533252 python3.9[192711]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:42:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:56 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:56 np0005533252 python3.9[192864]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:42:56 np0005533252 systemd[1]: Reloading.
Nov 24 04:42:56 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:42:56 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:42:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:57 np0005533252 systemd[1]: Starting libvirt logging daemon socket...
Nov 24 04:42:57 np0005533252 systemd[1]: Listening on libvirt logging daemon socket.
Nov 24 04:42:57 np0005533252 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 24 04:42:57 np0005533252 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 24 04:42:57 np0005533252 systemd[1]: Starting libvirt logging daemon...
Nov 24 04:42:57 np0005533252 systemd[1]: Started libvirt logging daemon.
Nov 24 04:42:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:57.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:58 np0005533252 python3.9[193057]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:42:58 np0005533252 systemd[1]: Reloading.
Nov 24 04:42:58 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:42:58 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:42:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:58 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:58 np0005533252 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 24 04:42:58 np0005533252 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 24 04:42:58 np0005533252 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 24 04:42:58 np0005533252 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 24 04:42:58 np0005533252 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 24 04:42:58 np0005533252 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 24 04:42:58 np0005533252 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 04:42:58 np0005533252 systemd[1]: Started libvirt nodedev daemon.
Nov 24 04:42:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:42:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:58.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:42:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:42:59 np0005533252 python3.9[193274]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:42:59 np0005533252 systemd[1]: Reloading.
Nov 24 04:42:59 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:42:59 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:42:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:42:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:42:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:59.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:42:59 np0005533252 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 24 04:42:59 np0005533252 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 24 04:42:59 np0005533252 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 24 04:42:59 np0005533252 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 24 04:42:59 np0005533252 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 24 04:42:59 np0005533252 systemd[1]: Starting libvirt proxy daemon...
Nov 24 04:42:59 np0005533252 systemd[1]: Started libvirt proxy daemon.
Nov 24 04:42:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:42:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:42:59 np0005533252 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 24 04:43:00 np0005533252 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 24 04:43:00 np0005533252 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 24 04:43:00 np0005533252 python3.9[193494]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:43:00 np0005533252 systemd[1]: Reloading.
Nov 24 04:43:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:00 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:43:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:43:00 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:43:00 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:43:00 np0005533252 systemd[1]: Listening on libvirt locking daemon socket.
Nov 24 04:43:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094300 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:43:00 np0005533252 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 24 04:43:00 np0005533252 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 04:43:00 np0005533252 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 24 04:43:00 np0005533252 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 24 04:43:00 np0005533252 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 24 04:43:00 np0005533252 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 24 04:43:00 np0005533252 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 24 04:43:00 np0005533252 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 24 04:43:00 np0005533252 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 24 04:43:00 np0005533252 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 04:43:00 np0005533252 systemd[1]: Started libvirt QEMU daemon.
Nov 24 04:43:00 np0005533252 setroubleshoot[193311]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 374d308f-c9cc-4cff-9bc3-7134ee063b95
Nov 24 04:43:00 np0005533252 setroubleshoot[193311]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 24 04:43:00 np0005533252 setroubleshoot[193311]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 374d308f-c9cc-4cff-9bc3-7134ee063b95
Nov 24 04:43:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:00 np0005533252 setroubleshoot[193311]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 24 04:43:01 np0005533252 python3.9[193713]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:43:01 np0005533252 systemd[1]: Reloading.
Nov 24 04:43:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:01.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:01 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:43:01 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:43:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:01 np0005533252 systemd[1]: Starting libvirt secret daemon socket...
Nov 24 04:43:01 np0005533252 systemd[1]: Listening on libvirt secret daemon socket.
Nov 24 04:43:01 np0005533252 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 24 04:43:01 np0005533252 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 24 04:43:01 np0005533252 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 24 04:43:01 np0005533252 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 24 04:43:01 np0005533252 systemd[1]: Starting libvirt secret daemon...
Nov 24 04:43:01 np0005533252 systemd[1]: Started libvirt secret daemon.
Nov 24 04:43:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:02 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:03.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:03 np0005533252 python3.9[193950]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:04 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:04 np0005533252 python3.9[194103]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 04:43:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:43:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:43:05 np0005533252 python3.9[194255]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:43:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:05.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:06 np0005533252 python3.9[194410]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 04:43:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:06 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:06.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:07 np0005533252 python3.9[194560]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:07.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:07 np0005533252 python3.9[194681]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977386.6900094-3360-160950336240288/.source.xml follow=False _original_basename=secret.xml.j2 checksum=50e2d7af60e90224d932c14cb656694b42455a32 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:08 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:08 np0005533252 python3.9[194834]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 84a084c3-61a7-5de7-8207-1f88efa59a64#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:43:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:08.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:09 np0005533252 python3.9[194996]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:09.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:10 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:10.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:11 np0005533252 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 24 04:43:11 np0005533252 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 24 04:43:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:11.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:12 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:12 np0005533252 python3.9[195461]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:13.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:13 np0005533252 python3.9[195613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:14 np0005533252 python3.9[195737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977393.3419552-3525-228444245890567/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:14 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:14.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:15 np0005533252 python3.9[195889]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:15 np0005533252 auditd[703]: Audit daemon rotating log files
Nov 24 04:43:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:43:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:43:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:15.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:16 np0005533252 podman[196014]: 2025-11-24 09:43:16.169953926 +0000 UTC m=+0.076068875 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 04:43:16 np0005533252 python3.9[196062]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:16 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:16 np0005533252 python3.9[196146]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:16.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:17.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:17 np0005533252 python3.9[196298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:18 np0005533252 python3.9[196377]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9hfc3gfd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.370814) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398370903, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 682, "num_deletes": 252, "total_data_size": 1399932, "memory_usage": 1419952, "flush_reason": "Manual Compaction"}
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398379776, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 646954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17999, "largest_seqno": 18676, "table_properties": {"data_size": 643929, "index_size": 933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7731, "raw_average_key_size": 19, "raw_value_size": 637731, "raw_average_value_size": 1647, "num_data_blocks": 41, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977356, "oldest_key_time": 1763977356, "file_creation_time": 1763977398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 9037 microseconds, and 5007 cpu microseconds.
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.379850) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 646954 bytes OK
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.379873) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381200) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381213) EVENT_LOG_v1 {"time_micros": 1763977398381209, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381227) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1396202, prev total WAL file size 1396202, number of live WAL files 2.
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381783) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353035' seq:0, type:0; will stop at (end)
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(631KB)], [30(14MB)]
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398381832, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 16117036, "oldest_snapshot_seqno": -1}
Nov 24 04:43:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:18 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4984 keys, 12264381 bytes, temperature: kUnknown
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398450012, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12264381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12230374, "index_size": 20457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125569, "raw_average_key_size": 25, "raw_value_size": 12139400, "raw_average_value_size": 2435, "num_data_blocks": 850, "num_entries": 4984, "num_filter_entries": 4984, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.450295) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12264381 bytes
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451488) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.0 rd, 179.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.8 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(43.9) write-amplify(19.0) OK, records in: 5485, records dropped: 501 output_compression: NoCompression
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451513) EVENT_LOG_v1 {"time_micros": 1763977398451502, "job": 16, "event": "compaction_finished", "compaction_time_micros": 68287, "compaction_time_cpu_micros": 45190, "output_level": 6, "num_output_files": 1, "total_output_size": 12264381, "num_input_records": 5485, "num_output_records": 4984, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398452211, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398456630, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.381679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:43:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:43:18.456744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:43:18 np0005533252 python3.9[196529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:18.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:19 np0005533252 python3.9[196607]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:19 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:19 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:43:20.042 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:43:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:43:20.043 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:43:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:43:20.043 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:43:20 np0005533252 python3.9[196763]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:43:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:20 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:21.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:21 np0005533252 python3[196916]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 04:43:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:21 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:21 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:22 np0005533252 python3.9[197069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:22 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:22 np0005533252 python3.9[197147]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:23.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:23.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:23 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:23 np0005533252 python3.9[197324]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:23 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:24 np0005533252 python3.9[197403]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:24 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:24 np0005533252 podman[197527]: 2025-11-24 09:43:24.921457265 +0000 UTC m=+0.056360472 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 04:43:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:25.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:25 np0005533252 python3.9[197574]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:25.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:25 np0005533252 python3.9[197653]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:25 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:25 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:26 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:26 np0005533252 python3.9[197806]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:26 np0005533252 python3.9[197884]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:27.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:27 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:27 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:27 np0005533252 python3.9[198036]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:28 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:28 np0005533252 python3.9[198162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977407.3683364-3900-240333734578460/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:29.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:29 np0005533252 python3.9[198314]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:30 np0005533252 python3.9[198467]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:43:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:43:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:30 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:43:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:31 np0005533252 python3.9[198622]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:31.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:31 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:31 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:32 np0005533252 python3.9[198775]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:43:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:32 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:32 np0005533252 python3.9[198928]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:43:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:33.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:33.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:33 np0005533252 python3.9[199082]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:43:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:34 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:34 np0005533252 python3.9[199238]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:35 np0005533252 python3.9[199390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:35.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:35 np0005533252 python3.9[199513]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977414.9819233-4116-252288682421007/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:36 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:36 np0005533252 python3.9[199666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:37.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:37 np0005533252 python3.9[199789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977416.4636953-4161-130955348161502/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:37.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:37 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:37 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:38 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:38 np0005533252 python3.9[199942]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:43:39 np0005533252 python3.9[200065]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977418.037891-4206-53680428047407/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:43:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:39.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094339 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:43:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:39.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:40 np0005533252 python3.9[200217]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:43:40 np0005533252 systemd[1]: Reloading.
Nov 24 04:43:40 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:43:40 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:43:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:40 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:40 np0005533252 systemd[1]: Reached target edpm_libvirt.target.
Nov 24 04:43:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:41.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:41 np0005533252 python3.9[200410]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 04:43:41 np0005533252 systemd[1]: Reloading.
Nov 24 04:43:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:41.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:41 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:43:41 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:43:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:41 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:41 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:41 np0005533252 systemd[1]: Reloading.
Nov 24 04:43:41 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:43:41 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:43:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:42 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:42 np0005533252 systemd[1]: session-52.scope: Deactivated successfully.
Nov 24 04:43:42 np0005533252 systemd[1]: session-52.scope: Consumed 3min 19.360s CPU time.
Nov 24 04:43:42 np0005533252 systemd-logind[823]: Session 52 logged out. Waiting for processes to exit.
Nov 24 04:43:42 np0005533252 systemd-logind[823]: Removed session 52.
Nov 24 04:43:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:43.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:43.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:45.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:43:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:43:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:45.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:45 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:45 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:46 np0005533252 podman[200535]: 2025-11-24 09:43:46.397284237 +0000 UTC m=+0.125445898 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:43:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:46 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:47.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:43:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:48 np0005533252 systemd-logind[823]: New session 53 of user zuul.
Nov 24 04:43:48 np0005533252 systemd[1]: Started Session 53 of User zuul.
Nov 24 04:43:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:48 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:49 np0005533252 python3.9[200715]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:43:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:49.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:50 np0005533252 python3.9[200870]: ansible-ansible.builtin.service_facts Invoked
Nov 24 04:43:50 np0005533252 network[200887]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:43:50 np0005533252 network[200888]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:43:50 np0005533252 network[200889]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:43:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:43:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:43:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:51.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:43:51 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:43:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:43:51 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:43:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:43:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:52 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:43:52 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:43:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:53.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:53.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000015:nfs.cephfs.0: -2
Nov 24 04:43:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:43:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:54 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:55 np0005533252 python3.9[201246]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 04:43:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:55.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:55 np0005533252 podman[201255]: 2025-11-24 09:43:55.342708699 +0000 UTC m=+0.063678813 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 04:43:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:43:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:55.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:43:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:56 np0005533252 python3.9[201349]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:43:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:56 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:56 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:43:56 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:43:56 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:43:56 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:43:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:57.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:57.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:58 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:43:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:43:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:59.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:43:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094359 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:43:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:43:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:43:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:43:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:43:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:43:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:44:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:44:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:00 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:02 np0005533252 python3.9[201531]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:44:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:02 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:03 np0005533252 python3.9[201708]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:44:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:04 np0005533252 python3.9[201862]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:44:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:04 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:04 np0005533252 python3.9[202014]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:44:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:05 np0005533252 python3.9[202167]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:06 np0005533252 python3.9[202291]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977445.3305414-246-187096666658306/.source.iscsi _original_basename=.t0qswyzp follow=False checksum=cd5378efa417da90db15e5c3bc37bc9ae6376a29 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:06 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:07.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:07 np0005533252 python3.9[202443]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:08 np0005533252 python3.9[202596]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:08 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:44:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:09.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:44:09 np0005533252 python3.9[202748]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:44:09 np0005533252 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 24 04:44:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:09.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:10 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:10 np0005533252 python3.9[202905]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:44:10 np0005533252 systemd[1]: Reloading.
Nov 24 04:44:10 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:44:10 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:44:10 np0005533252 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 04:44:10 np0005533252 systemd[1]: Starting Open-iSCSI...
Nov 24 04:44:10 np0005533252 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 04:44:10 np0005533252 systemd[1]: Started Open-iSCSI.
Nov 24 04:44:10 np0005533252 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 24 04:44:10 np0005533252 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 24 04:44:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:11.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:11.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:12 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:13.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:13 np0005533252 python3.9[203108]: ansible-ansible.builtin.service_facts Invoked
Nov 24 04:44:13 np0005533252 network[203125]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:44:13 np0005533252 network[203126]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:44:13 np0005533252 network[203127]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:44:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:14 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:44:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:44:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:15.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:16 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:16 np0005533252 podman[203205]: 2025-11-24 09:44:16.518830384 +0000 UTC m=+0.077758321 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 04:44:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:17.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:17.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:18 np0005533252 python3.9[203426]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 04:44:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:18 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:19 np0005533252 python3.9[203578]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 24 04:44:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:19.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:19 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:19 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:19 np0005533252 python3.9[203734]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:44:20.044 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:44:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:44:20.044 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:44:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:44:20.044 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:44:20 np0005533252 python3.9[203858]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977459.400968-477-131859846386984/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:20 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:21.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094421 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:44:21 np0005533252 python3.9[204010]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:21 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:21 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:22 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:22 np0005533252 python3.9[204163]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:44:22 np0005533252 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 04:44:22 np0005533252 systemd[1]: Stopped Load Kernel Modules.
Nov 24 04:44:22 np0005533252 systemd[1]: Stopping Load Kernel Modules...
Nov 24 04:44:22 np0005533252 systemd[1]: Starting Load Kernel Modules...
Nov 24 04:44:22 np0005533252 systemd[1]: Finished Load Kernel Modules.
Nov 24 04:44:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:23.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:23 np0005533252 python3.9[204344]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:44:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:23 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:23 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:24 np0005533252 python3.9[204497]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:44:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:24 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:25.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:25 np0005533252 python3.9[204649]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:44:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:25 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:25 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:25 np0005533252 podman[204773]: 2025-11-24 09:44:25.915266132 +0000 UTC m=+0.097942659 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 04:44:26 np0005533252 python3.9[204817]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:26 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:26 np0005533252 python3.9[204943]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977465.5503452-651-81210425115099/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000072s ======
Nov 24 04:44:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:27.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Nov 24 04:44:27 np0005533252 python3.9[205095]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:44:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:27.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:27 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:27 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:28 np0005533252 python3.9[205249]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:28 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:29 np0005533252 python3.9[205401]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:44:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:29.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:29 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:29 np0005533252 python3.9[205553]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:44:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:44:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:30 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:30 np0005533252 python3.9[205706]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:31 np0005533252 python3.9[205858]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:31.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:31 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:31 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:31 np0005533252 python3.9[206010]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:32 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:32 np0005533252 python3.9[206163]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:32 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:44:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:32 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:44:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:33.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:33 np0005533252 python3.9[206315]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:44:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:33.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:33 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:34 np0005533252 python3.9[206470]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:34 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:35.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:35 np0005533252 python3.9[206622]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:44:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:35.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:44:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:35 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:36 np0005533252 python3.9[206775]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:36 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:36 np0005533252 python3.9[206853]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:44:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:37.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:37 np0005533252 python3.9[207005]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:37.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:37 np0005533252 python3.9[207083]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:44:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:37 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:37 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:38 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:38 np0005533252 python3.9[207236]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:39.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:39 np0005533252 python3.9[207388]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:39.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:39 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:39 np0005533252 python3.9[207466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:40 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:40 np0005533252 python3.9[207619]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:41.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:41 np0005533252 python3.9[207697]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094441 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:44:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:41.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:41 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:41 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f8003b50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:42 np0005533252 python3.9[207849]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:44:42 np0005533252 systemd[1]: Reloading.
Nov 24 04:44:42 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:44:42 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:44:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:42 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:43.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:43.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:43 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:43 np0005533252 python3.9[208063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:44 np0005533252 python3.9[208144]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:44 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:45 np0005533252 python3.9[208296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:45.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:44:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:44:45 np0005533252 python3.9[208374]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:45.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:45 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:45 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:46 np0005533252 python3.9[208527]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:44:46 np0005533252 systemd[1]: Reloading.
Nov 24 04:44:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:46 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:46 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:44:46 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:44:46 np0005533252 systemd[1]: Starting Create netns directory...
Nov 24 04:44:46 np0005533252 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 04:44:46 np0005533252 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 04:44:46 np0005533252 systemd[1]: Finished Create netns directory.
Nov 24 04:44:46 np0005533252 podman[208564]: 2025-11-24 09:44:46.874778088 +0000 UTC m=+0.092896595 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 04:44:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:47.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:47 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:47 np0005533252 python3.9[208746]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:44:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:48 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:48 np0005533252 python3.9[208899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:49 np0005533252 python3.9[209022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977488.2253418-1272-143177971968933/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:44:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:49.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:49 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:50 np0005533252 python3.9[209175]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:44:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:50 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:51 np0005533252 python3.9[209327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:44:51 np0005533252 python3.9[209450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977490.678763-1347-272991374662609/.source.json _original_basename=.ydzvx2lm follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:51.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a2f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:51 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:52 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094452 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:44:52 np0005533252 python3.9[209603]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:44:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:53.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:53 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:54 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:55 np0005533252 python3.9[210031]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 24 04:44:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:55.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:55 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:56 np0005533252 podman[210156]: 2025-11-24 09:44:56.249243014 +0000 UTC m=+0.050445127 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 04:44:56 np0005533252 python3.9[210202]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 04:44:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:56 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:57 np0005533252 python3.9[210404]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 04:44:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:57.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:57 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:44:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.401454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498401517, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1150, "num_deletes": 255, "total_data_size": 2722643, "memory_usage": 2762160, "flush_reason": "Manual Compaction"}
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498411315, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 1799304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18681, "largest_seqno": 19826, "table_properties": {"data_size": 1794258, "index_size": 2570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10170, "raw_average_key_size": 18, "raw_value_size": 1784267, "raw_average_value_size": 3232, "num_data_blocks": 115, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977399, "oldest_key_time": 1763977399, "file_creation_time": 1763977498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 9969 microseconds, and 4174 cpu microseconds.
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.411429) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 1799304 bytes OK
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.411486) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.412765) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.412779) EVENT_LOG_v1 {"time_micros": 1763977498412776, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.412794) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2717074, prev total WAL file size 2717074, number of live WAL files 2.
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.413891) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(1757KB)], [33(11MB)]
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498413950, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14063685, "oldest_snapshot_seqno": -1}
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5012 keys, 13582532 bytes, temperature: kUnknown
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498483554, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13582532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13547529, "index_size": 21389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127282, "raw_average_key_size": 25, "raw_value_size": 13455256, "raw_average_value_size": 2684, "num_data_blocks": 878, "num_entries": 5012, "num_filter_entries": 5012, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.483886) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13582532 bytes
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.485246) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.7 rd, 194.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.7 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(15.4) write-amplify(7.5) OK, records in: 5536, records dropped: 524 output_compression: NoCompression
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.485262) EVENT_LOG_v1 {"time_micros": 1763977498485255, "job": 18, "event": "compaction_finished", "compaction_time_micros": 69741, "compaction_time_cpu_micros": 25581, "output_level": 6, "num_output_files": 1, "total_output_size": 13582532, "num_input_records": 5536, "num_output_records": 5012, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498485801, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498488261, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.413780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:44:58.488499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:44:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:58 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:58 np0005533252 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:44:58 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:44:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:44:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:44:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:59.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:44:59 np0005533252 python3[210617]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 04:44:59 np0005533252 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 04:44:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:44:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:44:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:59.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:44:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:44:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:44:59 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:45:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:45:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:00 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e00029f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:00 np0005533252 podman[210630]: 2025-11-24 09:45:00.882741856 +0000 UTC m=+1.244656420 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 04:45:01 np0005533252 podman[210691]: 2025-11-24 09:45:01.029580982 +0000 UTC m=+0.048582321 container create 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 04:45:01 np0005533252 podman[210691]: 2025-11-24 09:45:01.005323943 +0000 UTC m=+0.024325312 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 04:45:01 np0005533252 python3[210617]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 04:45:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:01.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:01 np0005533252 python3.9[210880]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:45:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:01 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:02 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:45:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:45:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:45:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:02 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:02 np0005533252 python3.9[211035]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:02 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:45:02 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:45:03 np0005533252 python3.9[211136]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:45:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:45:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:03.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:45:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:03 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:04 np0005533252 python3.9[211312]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977503.1061673-1611-88957895392269/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:04 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:04 np0005533252 python3.9[211389]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 04:45:04 np0005533252 systemd[1]: Reloading.
Nov 24 04:45:04 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:45:04 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:45:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:45:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:45:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:45:05 np0005533252 python3.9[211499]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:05 np0005533252 systemd[1]: Reloading.
Nov 24 04:45:05 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:45:05 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:45:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:05.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:05 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:05 np0005533252 systemd[1]: Starting multipathd container...
Nov 24 04:45:05 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:45:05 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 04:45:05 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 04:45:05 np0005533252 systemd[1]: Started /usr/bin/podman healthcheck run 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c.
Nov 24 04:45:05 np0005533252 podman[211539]: 2025-11-24 09:45:05.990875248 +0000 UTC m=+0.112193792 container init 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 04:45:06 np0005533252 multipathd[211555]: + sudo -E kolla_set_configs
Nov 24 04:45:06 np0005533252 podman[211539]: 2025-11-24 09:45:06.019849913 +0000 UTC m=+0.141168427 container start 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 04:45:06 np0005533252 podman[211539]: multipathd
Nov 24 04:45:06 np0005533252 systemd[1]: Started multipathd container.
Nov 24 04:45:06 np0005533252 multipathd[211555]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 04:45:06 np0005533252 multipathd[211555]: INFO:__main__:Validating config file
Nov 24 04:45:06 np0005533252 multipathd[211555]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 04:45:06 np0005533252 multipathd[211555]: INFO:__main__:Writing out command to execute
Nov 24 04:45:06 np0005533252 multipathd[211555]: ++ cat /run_command
Nov 24 04:45:06 np0005533252 multipathd[211555]: + CMD='/usr/sbin/multipathd -d'
Nov 24 04:45:06 np0005533252 multipathd[211555]: + ARGS=
Nov 24 04:45:06 np0005533252 multipathd[211555]: + sudo kolla_copy_cacerts
Nov 24 04:45:06 np0005533252 multipathd[211555]: Running command: '/usr/sbin/multipathd -d'
Nov 24 04:45:06 np0005533252 multipathd[211555]: + [[ ! -n '' ]]
Nov 24 04:45:06 np0005533252 multipathd[211555]: + . kolla_extend_start
Nov 24 04:45:06 np0005533252 multipathd[211555]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 04:45:06 np0005533252 multipathd[211555]: + umask 0022
Nov 24 04:45:06 np0005533252 multipathd[211555]: + exec /usr/sbin/multipathd -d
Nov 24 04:45:06 np0005533252 podman[211562]: 2025-11-24 09:45:06.11734071 +0000 UTC m=+0.087794288 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 04:45:06 np0005533252 multipathd[211555]: 3449.728612 | --------start up--------
Nov 24 04:45:06 np0005533252 multipathd[211555]: 3449.728629 | read /etc/multipath.conf
Nov 24 04:45:06 np0005533252 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-1f84109cc96ee636.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 04:45:06 np0005533252 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-1f84109cc96ee636.service: Failed with result 'exit-code'.
Nov 24 04:45:06 np0005533252 multipathd[211555]: 3449.733737 | path checkers start up
Nov 24 04:45:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:06 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:07 np0005533252 python3.9[211744]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:45:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:07.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a3b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:07 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:08 : epoch 69242889 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:45:08 np0005533252 python3.9[211899]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:45:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:08 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e0003af0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:09.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:09 np0005533252 python3.9[212064]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:45:09 np0005533252 systemd[1]: Stopping multipathd container...
Nov 24 04:45:09 np0005533252 multipathd[211555]: 3452.919644 | exit (signal)
Nov 24 04:45:09 np0005533252 multipathd[211555]: 3452.919704 | --------shut down-------
Nov 24 04:45:09 np0005533252 systemd[1]: libpod-16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c.scope: Deactivated successfully.
Nov 24 04:45:09 np0005533252 podman[212068]: 2025-11-24 09:45:09.342647564 +0000 UTC m=+0.073757713 container died 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:45:09 np0005533252 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-1f84109cc96ee636.timer: Deactivated successfully.
Nov 24 04:45:09 np0005533252 systemd[1]: Stopped /usr/bin/podman healthcheck run 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c.
Nov 24 04:45:09 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-userdata-shm.mount: Deactivated successfully.
Nov 24 04:45:09 np0005533252 systemd[1]: var-lib-containers-storage-overlay-9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3-merged.mount: Deactivated successfully.
Nov 24 04:45:09 np0005533252 podman[212068]: 2025-11-24 09:45:09.520365923 +0000 UTC m=+0.251476072 container cleanup 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 04:45:09 np0005533252 podman[212068]: multipathd
Nov 24 04:45:09 np0005533252 podman[212097]: multipathd
Nov 24 04:45:09 np0005533252 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 24 04:45:09 np0005533252 systemd[1]: Stopped multipathd container.
Nov 24 04:45:09 np0005533252 systemd[1]: Starting multipathd container...
Nov 24 04:45:09 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:45:09 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 04:45:09 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a72736bce2b1fff6e10002580058dd636e902f41fddbd4e487c2d61fd3699f3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 04:45:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:09.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:09 np0005533252 systemd[1]: Started /usr/bin/podman healthcheck run 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c.
Nov 24 04:45:09 np0005533252 podman[212110]: 2025-11-24 09:45:09.728514044 +0000 UTC m=+0.101915118 container init 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3)
Nov 24 04:45:09 np0005533252 multipathd[212127]: + sudo -E kolla_set_configs
Nov 24 04:45:09 np0005533252 podman[212110]: 2025-11-24 09:45:09.758008262 +0000 UTC m=+0.131409306 container start 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 24 04:45:09 np0005533252 podman[212110]: multipathd
Nov 24 04:45:09 np0005533252 systemd[1]: Started multipathd container.
Nov 24 04:45:09 np0005533252 multipathd[212127]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 04:45:09 np0005533252 multipathd[212127]: INFO:__main__:Validating config file
Nov 24 04:45:09 np0005533252 multipathd[212127]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 04:45:09 np0005533252 multipathd[212127]: INFO:__main__:Writing out command to execute
Nov 24 04:45:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:09 np0005533252 multipathd[212127]: ++ cat /run_command
Nov 24 04:45:09 np0005533252 multipathd[212127]: + CMD='/usr/sbin/multipathd -d'
Nov 24 04:45:09 np0005533252 multipathd[212127]: + ARGS=
Nov 24 04:45:09 np0005533252 multipathd[212127]: + sudo kolla_copy_cacerts
Nov 24 04:45:09 np0005533252 podman[212134]: 2025-11-24 09:45:09.815718887 +0000 UTC m=+0.049172815 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:45:09 np0005533252 multipathd[212127]: + [[ ! -n '' ]]
Nov 24 04:45:09 np0005533252 multipathd[212127]: + . kolla_extend_start
Nov 24 04:45:09 np0005533252 multipathd[212127]: Running command: '/usr/sbin/multipathd -d'
Nov 24 04:45:09 np0005533252 multipathd[212127]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 04:45:09 np0005533252 multipathd[212127]: + umask 0022
Nov 24 04:45:09 np0005533252 multipathd[212127]: + exec /usr/sbin/multipathd -d
Nov 24 04:45:09 np0005533252 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-3e19af4066309da.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 04:45:09 np0005533252 systemd[1]: 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c-3e19af4066309da.service: Failed with result 'exit-code'.
Nov 24 04:45:09 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:09 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:09 np0005533252 multipathd[212127]: 3453.443380 | --------start up--------
Nov 24 04:45:09 np0005533252 multipathd[212127]: 3453.443398 | read /etc/multipath.conf
Nov 24 04:45:09 np0005533252 multipathd[212127]: 3453.447961 | path checkers start up
Nov 24 04:45:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:10 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a3d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:10 np0005533252 python3.9[212320]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:11.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:11 np0005533252 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 04:45:11 np0005533252 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 24 04:45:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:11.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:11 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:11 np0005533252 python3.9[212474]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 04:45:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:12 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:12 np0005533252 python3.9[212627]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 24 04:45:12 np0005533252 kernel: Key type psk registered
Nov 24 04:45:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:13.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:13.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:13 np0005533252 python3.9[212790]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:45:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe40c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:13 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:13 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:14 np0005533252 python3.9[212914]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977513.281919-1851-49564879507983/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:14 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094514 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:45:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:15.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:15 np0005533252 python3.9[213066]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:45:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:45:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003fb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:15 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3d4003fb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:16 np0005533252 python3.9[213219]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:45:16 np0005533252 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 04:45:16 np0005533252 systemd[1]: Stopped Load Kernel Modules.
Nov 24 04:45:16 np0005533252 systemd[1]: Stopping Load Kernel Modules...
Nov 24 04:45:16 np0005533252 systemd[1]: Starting Load Kernel Modules...
Nov 24 04:45:16 np0005533252 systemd[1]: Finished Load Kernel Modules.
Nov 24 04:45:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:16 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:17 np0005533252 podman[213347]: 2025-11-24 09:45:17.096322113 +0000 UTC m=+0.098928395 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 24 04:45:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:17.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:17 np0005533252 python3.9[213391]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 04:45:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:17.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:17 np0005533252 kernel: ganesha.nfsd[202979]: segfault at 50 ip 00007fe4b65f232e sp 00007fe46bffe210 error 4 in libntirpc.so.5.8[7fe4b65d7000+2c000] likely on CPU 6 (core 0, socket 6)
Nov 24 04:45:17 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:45:17 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[188316]: 24/11/2025 09:45:17 : epoch 69242889 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3ec004590 fd 38 proxy ignored for local
Nov 24 04:45:17 np0005533252 systemd[1]: Started Process Core Dump (PID 213402/UID 0).
Nov 24 04:45:18 np0005533252 systemd-coredump[213403]: Process 188320 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 60:#012#0  0x00007fe4b65f232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:45:18 np0005533252 systemd[1]: systemd-coredump@7-213402-0.service: Deactivated successfully.
Nov 24 04:45:18 np0005533252 systemd[1]: systemd-coredump@7-213402-0.service: Consumed 1.034s CPU time.
Nov 24 04:45:19 np0005533252 podman[213412]: 2025-11-24 09:45:19.018699609 +0000 UTC m=+0.024638870 container died 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:45:19 np0005533252 systemd[1]: var-lib-containers-storage-overlay-09197f8c744ecf42ac0ecb6298ca36537bb06006ed0f17d3b474315923d9a2aa-merged.mount: Deactivated successfully.
Nov 24 04:45:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:19 np0005533252 podman[213412]: 2025-11-24 09:45:19.056330978 +0000 UTC m=+0.062270219 container remove 5b49fd5439277bce674ba48f12338b0bfe0b639e80f257cc26609ecccc449494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:45:19 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:45:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:19.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:19 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:45:19 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.461s CPU time.
Nov 24 04:45:19 np0005533252 systemd[1]: Reloading.
Nov 24 04:45:19 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:45:19 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:45:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:19.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:19 np0005533252 systemd[1]: Reloading.
Nov 24 04:45:19 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:45:19 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:45:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:45:20.045 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:45:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:45:20.046 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:45:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:45:20.046 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:45:20 np0005533252 systemd-logind[823]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 04:45:20 np0005533252 systemd-logind[823]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 04:45:20 np0005533252 lvm[213565]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 04:45:20 np0005533252 lvm[213565]: VG ceph_vg0 finished
Nov 24 04:45:20 np0005533252 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 04:45:20 np0005533252 systemd[1]: Starting man-db-cache-update.service...
Nov 24 04:45:20 np0005533252 systemd[1]: Reloading.
Nov 24 04:45:20 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:45:20 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:45:20 np0005533252 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 04:45:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:21.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:21.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:21 np0005533252 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 04:45:21 np0005533252 systemd[1]: Finished man-db-cache-update.service.
Nov 24 04:45:21 np0005533252 systemd[1]: man-db-cache-update.service: Consumed 1.545s CPU time.
Nov 24 04:45:21 np0005533252 systemd[1]: run-r91a6f304bc3f476eb1b8cd9da6dd4fc4.service: Deactivated successfully.
Nov 24 04:45:22 np0005533252 python3.9[214907]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:45:22 np0005533252 systemd[1]: Stopping Open-iSCSI...
Nov 24 04:45:22 np0005533252 iscsid[202946]: iscsid shutting down.
Nov 24 04:45:22 np0005533252 systemd[1]: iscsid.service: Deactivated successfully.
Nov 24 04:45:22 np0005533252 systemd[1]: Stopped Open-iSCSI.
Nov 24 04:45:22 np0005533252 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 04:45:22 np0005533252 systemd[1]: Starting Open-iSCSI...
Nov 24 04:45:22 np0005533252 systemd[1]: Started Open-iSCSI.
Nov 24 04:45:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:23.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094523 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:45:23 np0005533252 python3.9[215086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 04:45:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:25 np0005533252 python3.9[215243]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:26 np0005533252 python3.9[215396]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 04:45:26 np0005533252 systemd[1]: Reloading.
Nov 24 04:45:26 np0005533252 podman[215398]: 2025-11-24 09:45:26.562152225 +0000 UTC m=+0.055644914 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 24 04:45:26 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:45:26 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:45:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:27.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:27 np0005533252 python3.9[215600]: ansible-ansible.builtin.service_facts Invoked
Nov 24 04:45:27 np0005533252 network[215617]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 04:45:27 np0005533252 network[215618]: 'network-scripts' will be removed from distribution in near future.
Nov 24 04:45:27 np0005533252 network[215619]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 04:45:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:27.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:29.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:29 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 8.
Nov 24 04:45:29 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:45:29 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.461s CPU time.
Nov 24 04:45:29 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:45:29 np0005533252 podman[215726]: 2025-11-24 09:45:29.448807975 +0000 UTC m=+0.037989509 container create 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 04:45:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:45:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:45:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:45:29 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:45:29 np0005533252 podman[215726]: 2025-11-24 09:45:29.508011877 +0000 UTC m=+0.097193441 container init 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 04:45:29 np0005533252 podman[215726]: 2025-11-24 09:45:29.514794935 +0000 UTC m=+0.103976479 container start 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:45:29 np0005533252 bash[215726]: 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2
Nov 24 04:45:29 np0005533252 podman[215726]: 2025-11-24 09:45:29.43238684 +0000 UTC m=+0.021568384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:45:29 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:45:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:45:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:45:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:45:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:45:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:45:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:45:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:45:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:29 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:45:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:45:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:45:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:31.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:31.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:33.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:33 np0005533252 python3.9[215997]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:34 np0005533252 python3.9[216151]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:35.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:35 np0005533252 python3.9[216304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:35 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:45:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:35 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:45:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:35.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:36 np0005533252 python3.9[216457]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:36 np0005533252 python3.9[216611]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:37.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:37 np0005533252 python3.9[216764]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:38 np0005533252 python3.9[216918]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:38 np0005533252 python3.9[217071]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:45:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:39.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:40 np0005533252 podman[217098]: 2025-11-24 09:45:40.318897279 +0000 UTC m=+0.053744419 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 04:45:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:41.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:41 np0005533252 python3.9[217246]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:45:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c74000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:41 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c680014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:41 np0005533252 python3.9[217410]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:42 np0005533252 python3.9[217566]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:42 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c50000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:43 np0005533252 python3.9[217718]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:43.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:43 np0005533252 python3.9[217895]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:43 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c48000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094543 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:45:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:43 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c54000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:44 np0005533252 python3.9[218048]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:44 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c680021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:44 np0005533252 python3.9[218200]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:45.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:45 np0005533252 python3.9[218352]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:45:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:45:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:45 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:45 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:45 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:46 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c54001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:45:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:47.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:45:47 np0005533252 podman[218505]: 2025-11-24 09:45:47.245568213 +0000 UTC m=+0.094584957 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:45:47 np0005533252 python3.9[218506]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:47 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c680021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:47 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:47 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:47 np0005533252 python3.9[218684]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:48 np0005533252 python3.9[218837]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:48 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:49 np0005533252 python3.9[218989]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:49.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:49 np0005533252 python3.9[219141]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:49 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:49 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:49 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c54001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:50 np0005533252 python3.9[219294]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:50 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c68002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:50 np0005533252 python3.9[219446]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:51 np0005533252 python3.9[219598]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:45:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:51 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c48001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:51 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:52 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c54001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:53 np0005533252 python3.9[219751]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:45:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:53.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:53 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c68002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:45:53 np0005533252 kernel: ganesha.nfsd[217413]: segfault at 50 ip 00007f5d1cff032e sp 00007f5cd17f9210 error 4 in libntirpc.so.5.8[7f5d1cfd5000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 24 04:45:53 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:45:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[215745]: 24/11/2025 09:45:53 : epoch 69242939 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5c48001fc0 fd 38 proxy ignored for local
Nov 24 04:45:53 np0005533252 python3.9[219903]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 04:45:53 np0005533252 systemd[1]: Started Process Core Dump (PID 219904/UID 0).
Nov 24 04:45:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:54 np0005533252 python3.9[220058]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 04:45:54 np0005533252 systemd[1]: Reloading.
Nov 24 04:45:54 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:45:54 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:45:55 np0005533252 systemd-coredump[219906]: Process 215750 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 54:#012#0  0x00007f5d1cff032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:45:55 np0005533252 systemd[1]: systemd-coredump@8-219904-0.service: Deactivated successfully.
Nov 24 04:45:55 np0005533252 systemd[1]: systemd-coredump@8-219904-0.service: Consumed 1.192s CPU time.
Nov 24 04:45:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:55 np0005533252 podman[220097]: 2025-11-24 09:45:55.196727137 +0000 UTC m=+0.033202332 container died 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 04:45:55 np0005533252 systemd[1]: var-lib-containers-storage-overlay-f61d1d105d001d2c150063a1844050105200d289a25968831de4a983fc687f8f-merged.mount: Deactivated successfully.
Nov 24 04:45:55 np0005533252 podman[220097]: 2025-11-24 09:45:55.234678036 +0000 UTC m=+0.071153221 container remove 7eaa88e9799040652a32b02563e55848c945b8ab0e74738ac38765c3cd6db8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 04:45:55 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:45:55 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:45:55 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.432s CPU time.
Nov 24 04:45:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:55 np0005533252 python3.9[220295]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:45:56 np0005533252 python3.9[220449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:45:57 np0005533252 podman[220574]: 2025-11-24 09:45:57.04784005 +0000 UTC m=+0.076110745 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 04:45:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:57.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:57 np0005533252 python3.9[220621]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:45:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:57 np0005533252 python3.9[220774]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:45:58 np0005533252 python3.9[220928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:45:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:45:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:45:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:59.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:45:59 np0005533252 python3.9[221081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:45:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:45:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:45:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:59.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:45:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094559 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:46:00 np0005533252 python3.9[221235]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:46:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:46:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:46:00 np0005533252 python3.9[221388]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 04:46:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:01.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094601 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:46:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:01.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:02 np0005533252 python3.9[221542]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:03.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:46:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:46:03 np0005533252 python3.9[221753]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:46:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:03.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:46:03 np0005533252 python3.9[221952]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:46:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:46:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:46:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:46:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:46:04 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:46:04 np0005533252 python3.9[222105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:05.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:05 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 9.
Nov 24 04:46:05 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:46:05 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.432s CPU time.
Nov 24 04:46:05 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:46:05 np0005533252 python3.9[222257]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:05 np0005533252 podman[222329]: 2025-11-24 09:46:05.706458109 +0000 UTC m=+0.039882659 container create 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 04:46:05 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:46:05 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:46:05 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:46:05 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:46:05 np0005533252 podman[222329]: 2025-11-24 09:46:05.758395537 +0000 UTC m=+0.091820107 container init 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:46:05 np0005533252 podman[222329]: 2025-11-24 09:46:05.764624673 +0000 UTC m=+0.098049223 container start 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 04:46:05 np0005533252 bash[222329]: 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5
Nov 24 04:46:05 np0005533252 podman[222329]: 2025-11-24 09:46:05.688764866 +0000 UTC m=+0.022189436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:46:05 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:46:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:46:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:46:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:46:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:46:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:46:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:46:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:46:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:05 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:46:06 np0005533252 python3.9[222514]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:06 np0005533252 python3.9[222666]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:07.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:07 np0005533252 python3.9[222818]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:07.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:07 np0005533252 python3.9[222970]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:46:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:46:08 np0005533252 python3.9[223123]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:08 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:46:08 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:46:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:09.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:11 np0005533252 podman[223174]: 2025-11-24 09:46:11.27143396 +0000 UTC m=+0.051208981 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 04:46:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:11.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:11 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:46:11 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:11 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:46:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:13.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:14 np0005533252 python3.9[223323]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 24 04:46:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:15 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 04:46:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:15 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:46:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:15 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:46:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:15 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:46:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:15.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:46:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:46:15 np0005533252 python3.9[223476]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 04:46:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:15.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:15 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 24 04:46:15 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:15.988710) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:46:15 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 24 04:46:15 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977575988762, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1004, "num_deletes": 251, "total_data_size": 2407403, "memory_usage": 2444480, "flush_reason": "Manual Compaction"}
Nov 24 04:46:15 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576001443, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1553648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19831, "largest_seqno": 20830, "table_properties": {"data_size": 1549113, "index_size": 2187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10009, "raw_average_key_size": 19, "raw_value_size": 1540015, "raw_average_value_size": 3013, "num_data_blocks": 98, "num_entries": 511, "num_filter_entries": 511, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977498, "oldest_key_time": 1763977498, "file_creation_time": 1763977575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 12778 microseconds, and 4650 cpu microseconds.
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.001495) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1553648 bytes OK
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.001516) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003032) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003052) EVENT_LOG_v1 {"time_micros": 1763977576003048, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003068) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2402380, prev total WAL file size 2402380, number of live WAL files 2.
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1517KB)], [36(12MB)]
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576003861, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 15136180, "oldest_snapshot_seqno": -1}
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 5007 keys, 12955061 bytes, temperature: kUnknown
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576077870, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12955061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12920672, "index_size": 20775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127779, "raw_average_key_size": 25, "raw_value_size": 12828912, "raw_average_value_size": 2562, "num_data_blocks": 851, "num_entries": 5007, "num_filter_entries": 5007, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.078075) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12955061 bytes
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.079227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.3 rd, 174.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.0 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(18.1) write-amplify(8.3) OK, records in: 5523, records dropped: 516 output_compression: NoCompression
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.079242) EVENT_LOG_v1 {"time_micros": 1763977576079235, "job": 20, "event": "compaction_finished", "compaction_time_micros": 74073, "compaction_time_cpu_micros": 24410, "output_level": 6, "num_output_files": 1, "total_output_size": 12955061, "num_input_records": 5523, "num_output_records": 5007, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576079672, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576082256, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.003752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:46:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:46:16.082381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:46:16 np0005533252 python3.9[223635]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 04:46:16 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:46:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:17.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:17.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:46:18 np0005533252 systemd-logind[823]: New session 54 of user zuul.
Nov 24 04:46:18 np0005533252 systemd[1]: Started Session 54 of User zuul.
Nov 24 04:46:18 np0005533252 podman[223684]: 2025-11-24 09:46:18.350848038 +0000 UTC m=+0.084976936 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 04:46:18 np0005533252 systemd[1]: session-54.scope: Deactivated successfully.
Nov 24 04:46:18 np0005533252 systemd-logind[823]: Session 54 logged out. Waiting for processes to exit.
Nov 24 04:46:18 np0005533252 systemd-logind[823]: Removed session 54.
Nov 24 04:46:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:18 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:19 np0005533252 python3.9[223866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:19.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:19 np0005533252 python3.9[223987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977578.633077-3434-241421181591975/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:19.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:19 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:19 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:19 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:46:20.047 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:46:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:46:20.048 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:46:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:46:20.048 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:46:20 np0005533252 python3.9[224138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:20 np0005533252 python3.9[224214]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:20 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094620 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:46:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [ALERT] 327/094620 (4) : backend 'backend' has no server available!
Nov 24 04:46:21 np0005533252 python3.9[224364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094621 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:46:21 np0005533252 python3.9[224485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977580.6456141-3434-197341046069838/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:21 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094621 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:46:21 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:21 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:22 np0005533252 python3.9[224636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:22 np0005533252 python3.9[224757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977581.7029486-3434-197795321611245/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:22 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:23 np0005533252 python3.9[224907]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:23 np0005533252 python3.9[225028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977582.7327955-3434-97429587186870/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:46:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:23.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:46:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:23 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:23 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:23 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:24 np0005533252 python3.9[225204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:24 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c700016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:24 np0005533252 python3.9[225325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977583.7848728-3434-17790481739851/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:25.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:25.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:25 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:25 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:25 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:26 np0005533252 python3.9[225478]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:46:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:26 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:27.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:27 np0005533252 podman[225602]: 2025-11-24 09:46:27.260388423 +0000 UTC m=+0.057466068 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 04:46:27 np0005533252 python3.9[225649]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:46:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:27 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:27 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:27 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:28 np0005533252 python3.9[225802]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:46:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:28 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c940095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:28 np0005533252 python3.9[225954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:29.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:29 np0005533252 python3.9[226077]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763977588.4558976-3755-33318972044668/.source _original_basename=.z_tnyq5w follow=False checksum=f245e5d71a28845d8f9ab8777612e6084ea6ae5a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 24 04:46:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:29 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:29 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:29 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:30 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:46:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:46:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:46:30 np0005533252 python3.9[226230]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:46:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:30 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:31.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:31 np0005533252 python3.9[226382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094631 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:46:31 np0005533252 python3.9[226503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977590.9191883-3833-175940409423554/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:31 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c940095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:31 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:31 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:32 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:32 np0005533252 python3.9[226654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 04:46:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:33 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:46:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:33 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:46:33 np0005533252 python3.9[226775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977592.212074-3878-70969304307777/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 04:46:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:33.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 04:46:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6909 writes, 27K keys, 6909 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6909 writes, 1355 syncs, 5.10 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 485 writes, 766 keys, 485 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s#012Interval WAL: 485 writes, 231 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Nov 24 04:46:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:33 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:33 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:33 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c940095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:34 np0005533252 python3.9[226928]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 24 04:46:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:34 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:35.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:35 np0005533252 python3.9[227080]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 04:46:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:35.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:35 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:35 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:35 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:36 np0005533252 python3[227233]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 04:46:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:36 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:37 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:46:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:37 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:37 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:37 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:38 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c78003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:39.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:39.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:39 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c6c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:39 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:39 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c6c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:40 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:46:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:40 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:41.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:41 : epoch 6924295d : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:46:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:41 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:41 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:41 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:42 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c6c001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:43 np0005533252 podman[227289]: 2025-11-24 09:46:43.120060452 +0000 UTC m=+0.852095031 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:46:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:43.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:43.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:43 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c70004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:46:43 np0005533252 kernel: ganesha.nfsd[223676]: segfault at 50 ip 00007f1d42f0e32e sp 00007f1d07ffe210 error 4 in libntirpc.so.5.8[7f1d42ef3000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 04:46:43 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:46:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[222390]: 24/11/2025 09:46:43 : epoch 6924295d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840037a0 fd 39 proxy ignored for local
Nov 24 04:46:43 np0005533252 systemd[1]: Started Process Core Dump (PID 227352/UID 0).
Nov 24 04:46:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:45.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:46:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:46:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:45.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:46 np0005533252 systemd-coredump[227353]: Process 222400 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 46:#012#0  0x00007f1d42f0e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:46:46 np0005533252 systemd[1]: systemd-coredump@9-227352-0.service: Deactivated successfully.
Nov 24 04:46:46 np0005533252 systemd[1]: systemd-coredump@9-227352-0.service: Consumed 1.134s CPU time.
Nov 24 04:46:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:47.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:47 np0005533252 podman[227363]: 2025-11-24 09:46:47.261389991 +0000 UTC m=+0.574293543 container died 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 04:46:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:47.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094648 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:46:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:49.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:50 np0005533252 systemd[1]: var-lib-containers-storage-overlay-6b2de1769c175f32e837a1c6e5836b2db53c7621b3262e7d6f7d0028b6faa381-merged.mount: Deactivated successfully.
Nov 24 04:46:50 np0005533252 podman[227363]: 2025-11-24 09:46:50.561183085 +0000 UTC m=+3.874086617 container remove 8bf9bf8a7bbfa60aa69b766603edc552d95a53eeb4b60bc9ac8acd9912952fb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:46:50 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:46:50 np0005533252 podman[227245]: 2025-11-24 09:46:50.598740303 +0000 UTC m=+14.157810848 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 04:46:50 np0005533252 podman[227383]: 2025-11-24 09:46:50.636248872 +0000 UTC m=+1.368622089 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 04:46:50 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:46:50 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.335s CPU time.
Nov 24 04:46:50 np0005533252 podman[227460]: 2025-11-24 09:46:50.759033603 +0000 UTC m=+0.045890599 container create fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:46:50 np0005533252 podman[227460]: 2025-11-24 09:46:50.736418307 +0000 UTC m=+0.023275323 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 04:46:50 np0005533252 python3[227233]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 24 04:46:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:51.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:51 np0005533252 python3.9[227652]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:46:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:51.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094651 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:46:52 np0005533252 python3.9[227807]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 24 04:46:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 04:46:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:53.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 04:46:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:53.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:54 np0005533252 python3.9[227960]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 04:46:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:55.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094655 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:46:55 np0005533252 python3[228112]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 04:46:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:55.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:56 np0005533252 podman[228149]: 2025-11-24 09:46:56.06272671 +0000 UTC m=+0.059989211 container create 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 04:46:56 np0005533252 podman[228149]: 2025-11-24 09:46:56.034078773 +0000 UTC m=+0.031341284 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 04:46:56 np0005533252 python3[228112]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 24 04:46:57 np0005533252 python3.9[228341]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:46:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:57 np0005533252 podman[228467]: 2025-11-24 09:46:57.717835912 +0000 UTC m=+0.060574676 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 04:46:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:46:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:57.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:46:57 np0005533252 python3.9[228514]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:46:58 np0005533252 python3.9[228666]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977617.9770048-4154-184015674217826/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 04:46:58 np0005533252 python3.9[228742]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 04:46:58 np0005533252 systemd[1]: Reloading.
Nov 24 04:46:59 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:46:59 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:46:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:59.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:46:59 np0005533252 python3.9[228853]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 04:46:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:46:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:46:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:59.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:46:59 np0005533252 systemd[1]: Reloading.
Nov 24 04:47:00 np0005533252 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 04:47:00 np0005533252 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 04:47:00 np0005533252 systemd[1]: Starting nova_compute container...
Nov 24 04:47:00 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:47:00 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:00 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:00 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:00 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:00 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:00 np0005533252 podman[228896]: 2025-11-24 09:47:00.378565023 +0000 UTC m=+0.111622092 container init 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 04:47:00 np0005533252 podman[228896]: 2025-11-24 09:47:00.384583574 +0000 UTC m=+0.117640613 container start 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 04:47:00 np0005533252 podman[228896]: nova_compute
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + sudo -E kolla_set_configs
Nov 24 04:47:00 np0005533252 systemd[1]: Started nova_compute container.
Nov 24 04:47:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:47:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Validating config file
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying service configuration files
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Deleting /etc/ceph
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Creating directory /etc/ceph
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Writing out command to execute
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 04:47:00 np0005533252 nova_compute[228912]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 04:47:00 np0005533252 nova_compute[228912]: ++ cat /run_command
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + CMD=nova-compute
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + ARGS=
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + sudo kolla_copy_cacerts
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + [[ ! -n '' ]]
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + . kolla_extend_start
Nov 24 04:47:00 np0005533252 nova_compute[228912]: Running command: 'nova-compute'
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + umask 0022
Nov 24 04:47:00 np0005533252 nova_compute[228912]: + exec nova-compute
Nov 24 04:47:00 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 10.
Nov 24 04:47:00 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:47:00 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.335s CPU time.
Nov 24 04:47:00 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:47:00 np0005533252 podman[228997]: 2025-11-24 09:47:00.96905602 +0000 UTC m=+0.038703259 container create 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:47:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a4b1d9e246d85aca9deb7a685b356722e725e07097b58faac36c6d269f9e1d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:01 np0005533252 podman[228997]: 2025-11-24 09:47:01.020986839 +0000 UTC m=+0.090634128 container init 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 04:47:01 np0005533252 podman[228997]: 2025-11-24 09:47:01.029927392 +0000 UTC m=+0.099574641 container start 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 04:47:01 np0005533252 bash[228997]: 43fd0b496718b6eeeae7d88ea5f91542ae91f5585f1476ce3ea629e7cd469e22
Nov 24 04:47:01 np0005533252 podman[228997]: 2025-11-24 09:47:00.950870196 +0000 UTC m=+0.020517465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:47:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:47:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:47:01 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:47:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:47:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:47:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:47:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:47:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:47:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:47:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:01.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:47:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:01 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:47:01 np0005533252 python3.9[229180]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:47:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:47:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:01.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:47:02 np0005533252 python3.9[229331]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:47:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 04:47:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 3841 writes, 21K keys, 3841 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s
Cumulative WAL: 3841 writes, 3841 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1374 writes, 6698 keys, 1374 commit groups, 1.0 writes per commit group, ingest: 16.11 MB, 0.03 MB/s
Interval WAL: 1374 writes, 1374 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    132.9      0.25              0.07        10    0.025       0      0       0.0       0.0
  L6      1/0   12.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    140.2    118.9      0.96              0.26         9    0.106     44K   4819       0.0       0.0
 Sum      1/0   12.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    111.5    121.7      1.21              0.33        19    0.063     44K   4819       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.4    120.5    120.5      0.53              0.16         8    0.066     22K   2563       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    140.2    118.9      0.96              0.26         9    0.106     44K   4819       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    133.8      0.24              0.07         9    0.027       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.032, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 1.2 seconds
Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 304.00 MB usage: 8.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000101 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(480,8.12 MB,2.6726%) FilterBlock(19,131.05 KB,0.0420972%) IndexBlock(19,252.30 KB,0.0810473%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 24 04:47:02 np0005533252 nova_compute[228912]: 2025-11-24 09:47:02.914 228916 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 04:47:02 np0005533252 nova_compute[228912]: 2025-11-24 09:47:02.915 228916 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 04:47:02 np0005533252 nova_compute[228912]: 2025-11-24 09:47:02.915 228916 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 04:47:02 np0005533252 nova_compute[228912]: 2025-11-24 09:47:02.915 228916 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.065 228916 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.082 228916 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.082 228916 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 24 04:47:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:47:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:03.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.552 228916 INFO nova.virt.driver [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 24 04:47:03 np0005533252 python3.9[229485]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.728 228916 INFO nova.compute.provider_config [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.760 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.760 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.760 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.761 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.761 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.761 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.761 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.762 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.763 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.764 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.765 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.766 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.767 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.768 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.769 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.770 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.771 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.772 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.773 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.774 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.775 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.776 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.777 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.778 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.779 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.780 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.781 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.782 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.783 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.784 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.785 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.786 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.787 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.788 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.789 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.790 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.791 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.792 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.793 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.794 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.795 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.796 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.797 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.798 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.799 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.800 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.801 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.802 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.803 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.804 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.805 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.806 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.807 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.808 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.809 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.810 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.811 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.812 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.813 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.814 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.815 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.816 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.817 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.818 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.818 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.818 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.818 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.819 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.820 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.821 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.822 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.823 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.824 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.825 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.826 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.826 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.826 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.826 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.827 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.828 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.829 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.830 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.831 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.832 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.833 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.834 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.835 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.836 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.837 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.838 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.838 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.838 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.838 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.839 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.840 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.841 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.842 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.843 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 WARNING oslo_config.cfg [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 04:47:03 np0005533252 nova_compute[228912]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 04:47:03 np0005533252 nova_compute[228912]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 04:47:03 np0005533252 nova_compute[228912]: and ``live_migration_inbound_addr`` respectively.
Nov 24 04:47:03 np0005533252 nova_compute[228912]: ).  Its value may be silently ignored in the future.#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.844 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.845 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.846 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_secret_uuid        = 84a084c3-61a7-5de7-8207-1f88efa59a64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.847 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.848 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.849 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.850 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.851 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.852 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.853 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.854 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.855 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.856 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.857 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.858 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.859 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.860 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.861 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.862 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.863 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.864 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.865 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.866 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.867 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.868 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.869 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.870 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.871 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.872 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.873 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.874 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.875 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.876 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.877 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.878 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:47:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:03.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.879 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.880 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.881 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.882 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.883 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.884 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.885 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.886 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.887 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.888 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.889 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.890 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.891 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.892 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.893 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.894 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.895 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.896 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.897 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.898 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.899 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.900 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.901 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.902 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.903 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.904 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.905 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.906 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.907 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.908 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.909 228916 DEBUG oslo_service.service [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.910 228916 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.929 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.930 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.930 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.930 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 24 04:47:03 np0005533252 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 04:47:03 np0005533252 systemd[1]: Started libvirt QEMU daemon.
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.990 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f694a0d07c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.992 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f694a0d07c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 24 04:47:03 np0005533252 nova_compute[228912]: 2025-11-24 09:47:03.993 228916 INFO nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.010 228916 WARNING nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.010 228916 DEBUG nova.virt.libvirt.volume.mount [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 24 04:47:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:47:04 np0005533252 python3.9[229715]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 04:47:04 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.799 228916 INFO nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <host>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <uuid>719139db-46ba-4050-a77b-5fa732a73807</uuid>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <cpu>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <arch>x86_64</arch>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model>EPYC-Rome-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <vendor>AMD</vendor>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <microcode version='16777317'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <signature family='23' model='49' stepping='0'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='x2apic'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='tsc-deadline'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='osxsave'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='hypervisor'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='tsc_adjust'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='spec-ctrl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='stibp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='arch-capabilities'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='cmp_legacy'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='topoext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='virt-ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='lbrv'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='tsc-scale'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='vmcb-clean'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='pause-filter'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='pfthreshold'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='svme-addr-chk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='rdctl-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='skip-l1dfl-vmentry'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='mds-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature name='pschange-mc-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <pages unit='KiB' size='4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <pages unit='KiB' size='2048'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <pages unit='KiB' size='1048576'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </cpu>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <power_management>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <suspend_mem/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </power_management>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <iommu support='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <migration_features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <live/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <uri_transports>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <uri_transport>tcp</uri_transport>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <uri_transport>rdma</uri_transport>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </uri_transports>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </migration_features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <topology>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <cells num='1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <cell id='0'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:          <memory unit='KiB'>7864320</memory>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:          <pages unit='KiB' size='2048'>0</pages>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:          <distances>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <sibling id='0' value='10'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:          </distances>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:          <cpus num='8'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:          </cpus>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        </cell>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </cells>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </topology>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <cache>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </cache>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <secmodel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model>selinux</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <doi>0</doi>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </secmodel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <secmodel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model>dac</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <doi>0</doi>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </secmodel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </host>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <guest>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <os_type>hvm</os_type>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <arch name='i686'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <wordsize>32</wordsize>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <domain type='qemu'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <domain type='kvm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </arch>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <pae/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <nonpae/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <acpi default='on' toggle='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <apic default='on' toggle='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <cpuselection/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <deviceboot/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <disksnapshot default='on' toggle='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <externalSnapshot/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </guest>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <guest>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <os_type>hvm</os_type>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <arch name='x86_64'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <wordsize>64</wordsize>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <domain type='qemu'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <domain type='kvm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </arch>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <acpi default='on' toggle='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <apic default='on' toggle='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <cpuselection/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <deviceboot/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <disksnapshot default='on' toggle='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <externalSnapshot/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </guest>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 
Nov 24 04:47:04 np0005533252 nova_compute[228912]: </capabilities>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: #033[00m
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.804 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.827 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 04:47:04 np0005533252 nova_compute[228912]: <domainCapabilities>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <domain>kvm</domain>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <arch>i686</arch>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <vcpu max='240'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <iothreads supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <os supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <enum name='firmware'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <loader supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>rom</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pflash</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='readonly'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>yes</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>no</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='secure'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>no</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </loader>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </os>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <cpu>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='host-passthrough' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='hostPassthroughMigratable'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>on</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>off</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='maximum' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='maximumMigratable'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>on</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>off</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='host-model' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <vendor>AMD</vendor>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='x2apic'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='hypervisor'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='stibp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='overflow-recov'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='succor'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='lbrv'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc-scale'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='flushbyasid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='pause-filter'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='pfthreshold'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='disable' name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='custom' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Dhyana-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Genoa'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='auto-ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='auto-ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-128'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-256'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-512'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v6'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v7'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='KnightsMill'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512er'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512pf'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='KnightsMill-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512er'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512pf'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G4-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tbm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G5-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tbm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SierraForest'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cmpccxadd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SierraForest-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cmpccxadd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='athlon'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='athlon-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='core2duo'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='core2duo-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='coreduo'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='coreduo-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='n270'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='n270-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='phenom'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='phenom-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </cpu>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <memoryBacking supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <enum name='sourceType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>file</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>anonymous</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>memfd</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </memoryBacking>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <devices>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <disk supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='diskDevice'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>disk</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>cdrom</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>floppy</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>lun</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='bus'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>ide</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>fdc</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>scsi</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>sata</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio-transitional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio-non-transitional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </disk>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <graphics supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vnc</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>egl-headless</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>dbus</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </graphics>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <video supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='modelType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vga</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>cirrus</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>none</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>bochs</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>ramfb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </video>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <hostdev supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='mode'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>subsystem</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='startupPolicy'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>default</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>mandatory</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>requisite</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>optional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='subsysType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pci</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>scsi</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='capsType'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='pciBackend'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </hostdev>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <rng supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio-transitional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio-non-transitional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>random</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>egd</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>builtin</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </rng>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <filesystem supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='driverType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>path</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>handle</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtiofs</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </filesystem>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <tpm supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tpm-tis</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tpm-crb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>emulator</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>external</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendVersion'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>2.0</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </tpm>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <redirdev supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='bus'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </redirdev>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <channel supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pty</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>unix</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </channel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <crypto supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>qemu</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>builtin</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </crypto>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <interface supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>default</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>passt</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </interface>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <panic supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>isa</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>hyperv</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </panic>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <console supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>null</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vc</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pty</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>dev</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>file</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pipe</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>stdio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>udp</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tcp</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>unix</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>qemu-vdagent</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>dbus</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </console>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </devices>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <gic supported='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <vmcoreinfo supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <genid supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <backingStoreInput supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <backup supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <async-teardown supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <ps2 supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <sev supported='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <sgx supported='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <hyperv supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='features'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>relaxed</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vapic</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>spinlocks</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vpindex</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>runtime</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>synic</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>stimer</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>reset</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vendor_id</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>frequencies</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>reenlightenment</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tlbflush</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>ipi</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>avic</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>emsr_bitmap</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>xmm_input</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <defaults>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <spinlocks>4095</spinlocks>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <stimer_direct>on</stimer_direct>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </defaults>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </hyperv>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <launchSecurity supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='sectype'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tdx</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </launchSecurity>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: </domainCapabilities>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.832 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 04:47:04 np0005533252 nova_compute[228912]: <domainCapabilities>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <domain>kvm</domain>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <arch>i686</arch>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <vcpu max='4096'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <iothreads supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <os supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <enum name='firmware'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <loader supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>rom</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pflash</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='readonly'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>yes</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>no</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='secure'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>no</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </loader>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </os>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <cpu>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='host-passthrough' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='hostPassthroughMigratable'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>on</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>off</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='maximum' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='maximumMigratable'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>on</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>off</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='host-model' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <vendor>AMD</vendor>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='x2apic'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='hypervisor'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='stibp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='overflow-recov'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='succor'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='lbrv'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc-scale'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='flushbyasid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='pause-filter'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='pfthreshold'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='disable' name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='custom' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Dhyana-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Genoa'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='auto-ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='auto-ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-128'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-256'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-512'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v6'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v7'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='KnightsMill'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512er'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512pf'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='KnightsMill-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512er'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512pf'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G4-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tbm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G5-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tbm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SierraForest'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cmpccxadd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SierraForest-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cmpccxadd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='athlon'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='athlon-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='core2duo'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='core2duo-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='coreduo'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='coreduo-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='n270'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='n270-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='phenom'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='phenom-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </cpu>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <memoryBacking supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <enum name='sourceType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>file</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>anonymous</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>memfd</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </memoryBacking>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <devices>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <disk supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='diskDevice'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>disk</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>cdrom</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>floppy</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>lun</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='bus'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>fdc</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>scsi</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>sata</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio-transitional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio-non-transitional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </disk>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <graphics supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vnc</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>egl-headless</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>dbus</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </graphics>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <video supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='modelType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vga</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>cirrus</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>none</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>bochs</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>ramfb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </video>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <hostdev supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='mode'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>subsystem</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='startupPolicy'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>default</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>mandatory</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>requisite</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>optional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='subsysType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pci</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>scsi</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='capsType'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='pciBackend'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </hostdev>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <rng supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio-transitional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtio-non-transitional</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>random</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>egd</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>builtin</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </rng>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <filesystem supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='driverType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>path</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>handle</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>virtiofs</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </filesystem>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <tpm supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tpm-tis</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tpm-crb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>emulator</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>external</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendVersion'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>2.0</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </tpm>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <redirdev supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='bus'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </redirdev>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <channel supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pty</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>unix</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </channel>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <crypto supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>qemu</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>builtin</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </crypto>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <interface supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='backendType'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>default</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>passt</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </interface>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <panic supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>isa</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>hyperv</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </panic>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <console supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>null</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vc</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pty</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>dev</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>file</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pipe</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>stdio</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>udp</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tcp</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>unix</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>qemu-vdagent</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>dbus</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </console>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </devices>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <gic supported='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <vmcoreinfo supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <genid supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <backingStoreInput supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <backup supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <async-teardown supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <ps2 supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <sev supported='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <sgx supported='no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <hyperv supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='features'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>relaxed</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vapic</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>spinlocks</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vpindex</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>runtime</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>synic</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>stimer</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>reset</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>vendor_id</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>frequencies</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>reenlightenment</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tlbflush</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>ipi</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>avic</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>emsr_bitmap</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>xmm_input</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <defaults>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <spinlocks>4095</spinlocks>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <stimer_direct>on</stimer_direct>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </defaults>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </hyperv>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <launchSecurity supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='sectype'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>tdx</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </launchSecurity>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </features>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: </domainCapabilities>
Nov 24 04:47:04 np0005533252 nova_compute[228912]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.858 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 04:47:04 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.862 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 04:47:04 np0005533252 nova_compute[228912]: <domainCapabilities>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <domain>kvm</domain>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <arch>x86_64</arch>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <vcpu max='240'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <iothreads supported='yes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <os supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <enum name='firmware'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <loader supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>rom</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>pflash</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='readonly'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>yes</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>no</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='secure'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>no</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </loader>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  </os>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:  <cpu>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='host-passthrough' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='hostPassthroughMigratable'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>on</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>off</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='maximum' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <enum name='maximumMigratable'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>on</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <value>off</value>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='host-model' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <vendor>AMD</vendor>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='x2apic'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='hypervisor'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='stibp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='overflow-recov'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='succor'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='lbrv'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc-scale'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='flushbyasid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='pause-filter'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='pfthreshold'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <feature policy='disable' name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:    <mode name='custom' supported='yes'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Dhyana-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Genoa'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='auto-ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='auto-ibrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='EPYC-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-128'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-256'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx10-512'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v6'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v7'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='KnightsMill'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512er'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512pf'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='KnightsMill-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512er'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512pf'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G4'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G4-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G5'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tbm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G5-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tbm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v2'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v3'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SierraForest'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cmpccxadd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='SierraForest-v1'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ifma'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='cmpccxadd'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:04 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v5'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='athlon'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='athlon-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='core2duo'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='core2duo-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='coreduo'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='coreduo-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='n270'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='n270-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='phenom'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='phenom-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </cpu>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <memoryBacking supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <enum name='sourceType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>file</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>anonymous</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>memfd</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </memoryBacking>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <devices>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <disk supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='diskDevice'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>disk</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>cdrom</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>floppy</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>lun</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='bus'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>ide</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>fdc</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>scsi</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>sata</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio-transitional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio-non-transitional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </disk>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <graphics supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vnc</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>egl-headless</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>dbus</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </graphics>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <video supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='modelType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vga</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>cirrus</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>none</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>bochs</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>ramfb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </video>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <hostdev supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='mode'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>subsystem</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='startupPolicy'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>default</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>mandatory</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>requisite</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>optional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='subsysType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pci</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>scsi</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='capsType'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='pciBackend'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </hostdev>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <rng supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio-transitional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio-non-transitional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>random</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>egd</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>builtin</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </rng>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <filesystem supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='driverType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>path</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>handle</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtiofs</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </filesystem>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <tpm supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tpm-tis</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tpm-crb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>emulator</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>external</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendVersion'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>2.0</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </tpm>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <redirdev supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='bus'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </redirdev>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <channel supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pty</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>unix</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </channel>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <crypto supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>qemu</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>builtin</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </crypto>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <interface supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>default</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>passt</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </interface>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <panic supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>isa</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>hyperv</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </panic>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <console supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>null</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vc</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pty</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>dev</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>file</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pipe</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>stdio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>udp</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tcp</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>unix</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>qemu-vdagent</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>dbus</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </console>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </devices>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <features>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <gic supported='no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <vmcoreinfo supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <genid supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <backingStoreInput supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <backup supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <async-teardown supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <ps2 supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <sev supported='no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <sgx supported='no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <hyperv supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='features'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>relaxed</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vapic</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>spinlocks</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vpindex</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>runtime</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>synic</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>stimer</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>reset</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vendor_id</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>frequencies</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>reenlightenment</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tlbflush</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>ipi</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>avic</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>emsr_bitmap</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>xmm_input</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <defaults>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <spinlocks>4095</spinlocks>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <stimer_direct>on</stimer_direct>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </defaults>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </hyperv>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <launchSecurity supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='sectype'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tdx</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </launchSecurity>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </features>
Nov 24 04:47:05 np0005533252 nova_compute[228912]: </domainCapabilities>
Nov 24 04:47:05 np0005533252 nova_compute[228912]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.921 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 04:47:05 np0005533252 nova_compute[228912]: <domainCapabilities>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <domain>kvm</domain>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <arch>x86_64</arch>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <vcpu max='4096'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <iothreads supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <os supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <enum name='firmware'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>efi</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <loader supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>rom</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pflash</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='readonly'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>yes</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>no</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='secure'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>yes</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>no</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </loader>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </os>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <cpu>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <mode name='host-passthrough' supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='hostPassthroughMigratable'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>on</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>off</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <mode name='maximum' supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='maximumMigratable'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>on</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>off</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <mode name='host-model' supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <vendor>AMD</vendor>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='x2apic'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='hypervisor'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='stibp'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='ssbd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='overflow-recov'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='succor'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='ibrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='lbrv'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='tsc-scale'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='flushbyasid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='pause-filter'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='pfthreshold'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <feature policy='disable' name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <mode name='custom' supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Broadwell'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-noTSX'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Broadwell-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Cooperlake-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Denverton'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Denverton-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Dhyana-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Genoa'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='auto-ibrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='auto-ibrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Milan-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amd-psfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='stibp-always-on'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-Rome-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='EPYC-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='GraniteRapids-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx10'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx10-128'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx10-256'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx10-512'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='prefetchiti'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Haswell'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Haswell-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Haswell-noTSX'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Haswell-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v5'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v6'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Icelake-Server-v7'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='IvyBridge-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='KnightsMill'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512er'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512pf'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='KnightsMill-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512er'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512pf'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G4-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G5'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tbm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Opteron_G5-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fma4'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tbm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xop'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='SapphireRapids-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='amx-tile'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-bf16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-fp16'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bitalg'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrc'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fzrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='la57'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='taa-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xfd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='SierraForest'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cmpccxadd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='SierraForest-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-ifma'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cmpccxadd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fbsdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='fsrs'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ibrs-all'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mcdt-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pbrsb-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='psdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='serialize'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vaes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Client-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='hle'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='rtm'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Skylake-Server-v5'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512bw'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512cd'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512dq'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512f'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='avx512vl'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='invpcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pcid'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='pku'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='mpx'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v2'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v3'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='core-capability'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='split-lock-detect'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='Snowridge-v4'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='cldemote'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='erms'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='gfni'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdir64b'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='movdiri'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='xsaves'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='athlon'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='athlon-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='core2duo'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='core2duo-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='coreduo'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='coreduo-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='n270'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='n270-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='ss'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='phenom'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <blockers model='phenom-v1'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnow'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <feature name='3dnowext'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </blockers>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </mode>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </cpu>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <memoryBacking supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <enum name='sourceType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>file</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>anonymous</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <value>memfd</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </memoryBacking>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <devices>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <disk supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='diskDevice'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>disk</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>cdrom</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>floppy</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>lun</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='bus'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>fdc</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>scsi</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>sata</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio-transitional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio-non-transitional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </disk>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <graphics supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vnc</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>egl-headless</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>dbus</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </graphics>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <video supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='modelType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vga</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>cirrus</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>none</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>bochs</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>ramfb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </video>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <hostdev supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='mode'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>subsystem</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='startupPolicy'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>default</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>mandatory</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>requisite</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>optional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='subsysType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pci</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>scsi</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='capsType'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='pciBackend'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </hostdev>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <rng supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio-transitional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtio-non-transitional</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>random</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>egd</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>builtin</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </rng>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <filesystem supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='driverType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>path</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>handle</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>virtiofs</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </filesystem>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <tpm supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tpm-tis</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tpm-crb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>emulator</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>external</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendVersion'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>2.0</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </tpm>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <redirdev supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='bus'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>usb</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </redirdev>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <channel supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pty</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>unix</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </channel>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <crypto supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>qemu</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendModel'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>builtin</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </crypto>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <interface supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='backendType'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>default</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>passt</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </interface>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <panic supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='model'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>isa</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>hyperv</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </panic>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <console supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='type'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>null</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vc</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pty</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>dev</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>file</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>pipe</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>stdio</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>udp</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tcp</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>unix</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>qemu-vdagent</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>dbus</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </console>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </devices>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  <features>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <gic supported='no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <vmcoreinfo supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <genid supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <backingStoreInput supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <backup supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <async-teardown supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <ps2 supported='yes'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <sev supported='no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <sgx supported='no'/>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <hyperv supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='features'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>relaxed</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vapic</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>spinlocks</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vpindex</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>runtime</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>synic</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>stimer</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>reset</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>vendor_id</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>frequencies</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>reenlightenment</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tlbflush</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>ipi</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>avic</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>emsr_bitmap</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>xmm_input</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <defaults>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <spinlocks>4095</spinlocks>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <stimer_direct>on</stimer_direct>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </defaults>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </hyperv>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    <launchSecurity supported='yes'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      <enum name='sectype'>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:        <value>tdx</value>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:      </enum>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:    </launchSecurity>
Nov 24 04:47:05 np0005533252 nova_compute[228912]:  </features>
Nov 24 04:47:05 np0005533252 nova_compute[228912]: </domainCapabilities>
Nov 24 04:47:05 np0005533252 nova_compute[228912]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.991 228916 DEBUG nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.991 228916 INFO nova.virt.libvirt.host [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Secure Boot support detected#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.993 228916 INFO nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:04.993 228916 INFO nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.019 228916 DEBUG nova.virt.libvirt.driver [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.035 228916 INFO nova.virt.node [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Determined node identity 1b7b0f22-dba8-42a8-9de3-763c9152946e from /var/lib/nova/compute_id#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.047 228916 WARNING nova.compute.manager [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Compute nodes ['1b7b0f22-dba8-42a8-9de3-763c9152946e'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.066 228916 INFO nova.compute.manager [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.099 228916 WARNING nova.compute.manager [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.100 228916 DEBUG oslo_concurrency.lockutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.100 228916 DEBUG oslo_concurrency.lockutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.100 228916 DEBUG oslo_concurrency.lockutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.101 228916 DEBUG nova.compute.resource_tracker [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.101 228916 DEBUG oslo_concurrency.processutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:47:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:47:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:05.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:47:05 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:47:05 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2331233881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:47:05 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:47:05 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190925437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.557 228916 DEBUG oslo_concurrency.processutils [None req-3ac55ac5-b4ff-454c-9333-47dde30e48fd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:47:05 np0005533252 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 04:47:05 np0005533252 systemd[1]: Started libvirt nodedev daemon.
Nov 24 04:47:05 np0005533252 python3.9[229922]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 04:47:05 np0005533252 systemd[1]: Stopping nova_compute container...
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.798 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.799 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 04:47:05 np0005533252 nova_compute[228912]: 2025-11-24 09:47:05.799 228916 DEBUG oslo_concurrency.lockutils [None req-3e5cc141-ad1e-406e-add2-4265cad0ec91 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 04:47:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:47:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:05.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:47:06 np0005533252 virtqemud[229578]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 24 04:47:06 np0005533252 virtqemud[229578]: hostname: compute-1
Nov 24 04:47:06 np0005533252 virtqemud[229578]: End of file while reading data: Input/output error
Nov 24 04:47:06 np0005533252 systemd[1]: libpod-4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463.scope: Deactivated successfully.
Nov 24 04:47:06 np0005533252 systemd[1]: libpod-4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463.scope: Consumed 3.657s CPU time.
Nov 24 04:47:06 np0005533252 podman[229949]: 2025-11-24 09:47:06.238694217 +0000 UTC m=+0.478463657 container died 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:47:06 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463-userdata-shm.mount: Deactivated successfully.
Nov 24 04:47:06 np0005533252 systemd[1]: var-lib-containers-storage-overlay-998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21-merged.mount: Deactivated successfully.
Nov 24 04:47:06 np0005533252 podman[229949]: 2025-11-24 09:47:06.348063422 +0000 UTC m=+0.587832842 container cleanup 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:47:06 np0005533252 podman[229949]: nova_compute
Nov 24 04:47:06 np0005533252 podman[229982]: nova_compute
Nov 24 04:47:06 np0005533252 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 24 04:47:06 np0005533252 systemd[1]: Stopped nova_compute container.
Nov 24 04:47:06 np0005533252 systemd[1]: Starting nova_compute container...
Nov 24 04:47:06 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:47:06 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:06 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:06 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:06 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:06 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998428069ec542116c2095d13a6eac80a571eded75fdf90504711c791be6bc21/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:06 np0005533252 podman[229994]: 2025-11-24 09:47:06.501024007 +0000 UTC m=+0.072852083 container init 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 24 04:47:06 np0005533252 podman[229994]: 2025-11-24 09:47:06.510810532 +0000 UTC m=+0.082638598 container start 4f12c09c2b5a5f528cb4999d6b76c38bdb5027d103adcf8bdc114c4275996463 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + sudo -E kolla_set_configs
Nov 24 04:47:06 np0005533252 podman[229994]: nova_compute
Nov 24 04:47:06 np0005533252 systemd[1]: Started nova_compute container.
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Validating config file
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying service configuration files
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /etc/ceph
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Creating directory /etc/ceph
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Writing out command to execute
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 04:47:06 np0005533252 nova_compute[230010]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 04:47:06 np0005533252 nova_compute[230010]: ++ cat /run_command
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + CMD=nova-compute
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + ARGS=
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + sudo kolla_copy_cacerts
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + [[ ! -n '' ]]
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + . kolla_extend_start
Nov 24 04:47:06 np0005533252 nova_compute[230010]: Running command: 'nova-compute'
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + umask 0022
Nov 24 04:47:06 np0005533252 nova_compute[230010]: + exec nova-compute
Nov 24 04:47:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:47:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:07.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:47:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:07 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:47:07 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[229012]: 24/11/2025 09:47:07 : epoch 69242995 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:47:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:47:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:07.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:47:08 np0005533252 python3.9[230175]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 04:47:08 np0005533252 systemd[1]: Started libpod-conmon-fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9.scope.
Nov 24 04:47:08 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:47:08 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a160e91f30301b4dc4cb2eee53a414fa47f33567beea5b3af9dd52a065dbae/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:08 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a160e91f30301b4dc4cb2eee53a414fa47f33567beea5b3af9dd52a065dbae/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:08 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a160e91f30301b4dc4cb2eee53a414fa47f33567beea5b3af9dd52a065dbae/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 24 04:47:08 np0005533252 podman[230200]: 2025-11-24 09:47:08.376089129 +0000 UTC m=+0.116083234 container init fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 24 04:47:08 np0005533252 podman[230200]: 2025-11-24 09:47:08.384645843 +0000 UTC m=+0.124639928 container start fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:47:08 np0005533252 python3.9[230175]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Applying nova statedir ownership
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 24 04:47:08 np0005533252 nova_compute_init[230221]: INFO:nova_statedir:Nova statedir ownership complete
Nov 24 04:47:08 np0005533252 systemd[1]: libpod-fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9.scope: Deactivated successfully.
Nov 24 04:47:08 np0005533252 podman[230234]: 2025-11-24 09:47:08.493361753 +0000 UTC m=+0.032628667 container died fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 04:47:08 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9-userdata-shm.mount: Deactivated successfully.
Nov 24 04:47:08 np0005533252 systemd[1]: var-lib-containers-storage-overlay-73a160e91f30301b4dc4cb2eee53a414fa47f33567beea5b3af9dd52a065dbae-merged.mount: Deactivated successfully.
Nov 24 04:47:08 np0005533252 podman[230234]: 2025-11-24 09:47:08.524953333 +0000 UTC m=+0.064220247 container cleanup fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 04:47:08 np0005533252 systemd[1]: libpod-conmon-fe9899ea690749a9a0b3b5d5bfc012192a73e99876f03d91fe6d6c78aff266e9.scope: Deactivated successfully.
Nov 24 04:47:08 np0005533252 nova_compute[230010]: 2025-11-24 09:47:08.576 230014 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 04:47:08 np0005533252 nova_compute[230010]: 2025-11-24 09:47:08.577 230014 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 04:47:08 np0005533252 nova_compute[230010]: 2025-11-24 09:47:08.577 230014 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 04:47:08 np0005533252 nova_compute[230010]: 2025-11-24 09:47:08.577 230014 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 24 04:47:08 np0005533252 nova_compute[230010]: 2025-11-24 09:47:08.728 230014 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:47:08 np0005533252 nova_compute[230010]: 2025-11-24 09:47:08.754 230014 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:47:08 np0005533252 nova_compute[230010]: 2025-11-24 09:47:08.754 230014 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 24 04:47:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:47:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:47:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:47:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:47:09 np0005533252 systemd[1]: session-53.scope: Deactivated successfully.
Nov 24 04:47:09 np0005533252 systemd[1]: session-53.scope: Consumed 2min 13.766s CPU time.
Nov 24 04:47:09 np0005533252 systemd-logind[823]: Session 53 logged out. Waiting for processes to exit.
Nov 24 04:47:09 np0005533252 systemd-logind[823]: Removed session 53.
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.209 230014 INFO nova.virt.driver [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 24 04:47:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:47:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:47:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:09.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:47:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.303 230014 INFO nova.compute.provider_config [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.309 230014 DEBUG oslo_concurrency.lockutils [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.309 230014 DEBUG oslo_concurrency.lockutils [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.309 230014 DEBUG oslo_concurrency.lockutils [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.309 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.310 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.311 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.312 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.313 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.314 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.315 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.316 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.317 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.318 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.319 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.320 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.321 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.322 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.323 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.324 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.325 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.326 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.327 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.328 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.329 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.330 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.331 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.332 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.333 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.334 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.335 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.336 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.337 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.338 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.339 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.340 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.341 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.342 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.343 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.344 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.345 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.346 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.347 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.348 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.349 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.350 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.351 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.352 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.353 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.354 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.355 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.356 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.357 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.358 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.359 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.360 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.361 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.362 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.363 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.364 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.365 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.366 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.367 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.368 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.369 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.370 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.371 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.372 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.373 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.374 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.375 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.376 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.377 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.378 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.379 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.380 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.381 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.382 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.383 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.384 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 WARNING oslo_config.cfg [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 04:47:09 np0005533252 nova_compute[230010]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 04:47:09 np0005533252 nova_compute[230010]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 04:47:09 np0005533252 nova_compute[230010]: and ``live_migration_inbound_addr`` respectively.
Nov 24 04:47:09 np0005533252 nova_compute[230010]: ).  Its value may be silently ignored in the future.#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.385 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.386 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.387 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_secret_uuid        = 84a084c3-61a7-5de7-8207-1f88efa59a64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.388 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.389 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.390 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.391 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.392 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.393 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.394 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.395 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.396 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.397 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.398 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.399 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.400 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.401 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.402 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.403 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.404 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.405 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.406 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.407 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.408 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.409 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.410 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.411 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.412 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.413 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.414 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.415 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.416 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.417 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.418 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.419 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.420 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.421 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.422 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.423 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.424 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.425 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.426 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.427 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.428 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.429 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.430 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.431 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.432 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.433 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.434 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.435 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.436 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.437 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.437 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.437 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.437 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.438 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.439 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.440 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.441 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.442 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.443 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.444 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.445 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.446 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.447 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.448 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.449 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.450 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.450 230014 DEBUG oslo_service.service [None req-c42bb72a-1696-4788-a2fb-440d2bae85d1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.450 230014 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.463 230014 INFO nova.virt.node [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Determined node identity 1b7b0f22-dba8-42a8-9de3-763c9152946e from /var/lib/nova/compute_id#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.464 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.465 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.465 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.465 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.477 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4597019b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.479 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4597019b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.480 230014 INFO nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.488 230014 INFO nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <host>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <uuid>719139db-46ba-4050-a77b-5fa732a73807</uuid>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <arch>x86_64</arch>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model>EPYC-Rome-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <vendor>AMD</vendor>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <microcode version='16777317'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <signature family='23' model='49' stepping='0'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='x2apic'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='tsc-deadline'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='osxsave'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='hypervisor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='tsc_adjust'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='spec-ctrl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='stibp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='arch-capabilities'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='cmp_legacy'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='topoext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='virt-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='lbrv'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='tsc-scale'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='vmcb-clean'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='pause-filter'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='pfthreshold'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='svme-addr-chk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='rdctl-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='skip-l1dfl-vmentry'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='mds-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature name='pschange-mc-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <pages unit='KiB' size='4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <pages unit='KiB' size='2048'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <pages unit='KiB' size='1048576'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <power_management>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <suspend_mem/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </power_management>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <iommu support='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <migration_features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <live/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <uri_transports>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <uri_transport>tcp</uri_transport>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <uri_transport>rdma</uri_transport>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </uri_transports>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </migration_features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <topology>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <cells num='1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <cell id='0'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:          <memory unit='KiB'>7864320</memory>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:          <pages unit='KiB' size='2048'>0</pages>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:          <distances>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <sibling id='0' value='10'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:          </distances>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:          <cpus num='8'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:          </cpus>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        </cell>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </cells>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </topology>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <cache>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </cache>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <secmodel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model>selinux</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <doi>0</doi>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </secmodel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <secmodel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model>dac</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <doi>0</doi>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </secmodel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </host>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <guest>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <os_type>hvm</os_type>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <arch name='i686'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <wordsize>32</wordsize>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <domain type='qemu'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <domain type='kvm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </arch>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <pae/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <nonpae/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <acpi default='on' toggle='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <apic default='on' toggle='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <cpuselection/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <deviceboot/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <disksnapshot default='on' toggle='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <externalSnapshot/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </guest>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <guest>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <os_type>hvm</os_type>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <arch name='x86_64'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <wordsize>64</wordsize>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <domain type='qemu'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <domain type='kvm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </arch>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <acpi default='on' toggle='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <apic default='on' toggle='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <cpuselection/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <deviceboot/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <disksnapshot default='on' toggle='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <externalSnapshot/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </guest>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 
Nov 24 04:47:09 np0005533252 nova_compute[230010]: </capabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.493 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.495 230014 DEBUG nova.virt.libvirt.volume.mount [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.498 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 04:47:09 np0005533252 nova_compute[230010]: <domainCapabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <domain>kvm</domain>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <arch>i686</arch>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <vcpu max='4096'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <iothreads supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <os supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <enum name='firmware'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <loader supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>rom</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pflash</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='readonly'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>yes</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>no</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='secure'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>no</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </loader>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='host-passthrough' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='hostPassthroughMigratable'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>on</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>off</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='maximum' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='maximumMigratable'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>on</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>off</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='host-model' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <vendor>AMD</vendor>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='x2apic'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='hypervisor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='stibp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='overflow-recov'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='succor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='lbrv'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc-scale'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='flushbyasid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='pause-filter'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='pfthreshold'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='disable' name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='custom' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Dhyana-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Genoa'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='auto-ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='auto-ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-128'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-256'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-512'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v6'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v7'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='KnightsMill'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512er'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512pf'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='KnightsMill-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512er'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512pf'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G4-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tbm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G5-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tbm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SierraForest'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cmpccxadd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SierraForest-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cmpccxadd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='athlon'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='athlon-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='core2duo'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='core2duo-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='coreduo'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='coreduo-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='n270'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='n270-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='phenom'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='phenom-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <memoryBacking supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <enum name='sourceType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>file</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>anonymous</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>memfd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </memoryBacking>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <disk supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='diskDevice'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>disk</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>cdrom</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>floppy</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>lun</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='bus'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>fdc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>scsi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>sata</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-non-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <graphics supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vnc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>egl-headless</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dbus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <video supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='modelType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vga</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>cirrus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>none</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>bochs</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ramfb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <hostdev supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='mode'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>subsystem</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='startupPolicy'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>default</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>mandatory</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>requisite</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>optional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='subsysType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pci</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>scsi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='capsType'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='pciBackend'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </hostdev>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <rng supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-non-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>random</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>egd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>builtin</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <filesystem supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='driverType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>path</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>handle</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtiofs</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </filesystem>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <tpm supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tpm-tis</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tpm-crb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>emulator</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>external</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendVersion'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>2.0</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </tpm>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <redirdev supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='bus'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </redirdev>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <channel supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pty</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>unix</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </channel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <crypto supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>qemu</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>builtin</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </crypto>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <interface supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>default</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>passt</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <panic supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>isa</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>hyperv</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </panic>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <console supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>null</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pty</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dev</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>file</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pipe</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>stdio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>udp</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tcp</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>unix</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>qemu-vdagent</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dbus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <gic supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <vmcoreinfo supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <genid supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <backingStoreInput supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <backup supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <async-teardown supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <ps2 supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <sev supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <sgx supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <hyperv supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='features'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>relaxed</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vapic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>spinlocks</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vpindex</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>runtime</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>synic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>stimer</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>reset</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vendor_id</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>frequencies</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>reenlightenment</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tlbflush</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ipi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>avic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>emsr_bitmap</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>xmm_input</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <defaults>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <spinlocks>4095</spinlocks>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <stimer_direct>on</stimer_direct>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </defaults>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </hyperv>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <launchSecurity supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='sectype'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tdx</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </launchSecurity>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: </domainCapabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.504 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 04:47:09 np0005533252 nova_compute[230010]: <domainCapabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <domain>kvm</domain>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <arch>i686</arch>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <vcpu max='240'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <iothreads supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <os supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <enum name='firmware'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <loader supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>rom</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pflash</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='readonly'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>yes</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>no</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='secure'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>no</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </loader>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='host-passthrough' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='hostPassthroughMigratable'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>on</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>off</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='maximum' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='maximumMigratable'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>on</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>off</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='host-model' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <vendor>AMD</vendor>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='x2apic'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='hypervisor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='stibp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='overflow-recov'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='succor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='lbrv'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc-scale'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='flushbyasid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='pause-filter'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='pfthreshold'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='disable' name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='custom' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Dhyana-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Genoa'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='auto-ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='auto-ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-128'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-256'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-512'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v6'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v7'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='KnightsMill'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512er'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512pf'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='KnightsMill-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512er'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512pf'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G4-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tbm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G5-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tbm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SierraForest'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cmpccxadd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SierraForest-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cmpccxadd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='athlon'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='athlon-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='core2duo'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='core2duo-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='coreduo'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='coreduo-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='n270'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='n270-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='phenom'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='phenom-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <memoryBacking supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <enum name='sourceType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>file</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>anonymous</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>memfd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </memoryBacking>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <disk supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='diskDevice'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>disk</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>cdrom</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>floppy</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>lun</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='bus'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ide</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>fdc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>scsi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>sata</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-non-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <graphics supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vnc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>egl-headless</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dbus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <video supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='modelType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vga</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>cirrus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>none</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>bochs</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ramfb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <hostdev supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='mode'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>subsystem</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='startupPolicy'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>default</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>mandatory</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>requisite</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>optional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='subsysType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pci</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>scsi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='capsType'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='pciBackend'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </hostdev>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <rng supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-non-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>random</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>egd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>builtin</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <filesystem supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='driverType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>path</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>handle</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtiofs</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </filesystem>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <tpm supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tpm-tis</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tpm-crb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>emulator</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>external</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendVersion'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>2.0</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </tpm>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <redirdev supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='bus'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </redirdev>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <channel supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pty</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>unix</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </channel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <crypto supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>qemu</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>builtin</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </crypto>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <interface supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>default</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>passt</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <panic supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>isa</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>hyperv</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </panic>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <console supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>null</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pty</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dev</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>file</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pipe</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>stdio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>udp</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tcp</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>unix</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>qemu-vdagent</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dbus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <gic supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <vmcoreinfo supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <genid supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <backingStoreInput supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <backup supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <async-teardown supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <ps2 supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <sev supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <sgx supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <hyperv supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='features'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>relaxed</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vapic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>spinlocks</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vpindex</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>runtime</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>synic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>stimer</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>reset</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vendor_id</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>frequencies</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>reenlightenment</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tlbflush</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ipi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>avic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>emsr_bitmap</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>xmm_input</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <defaults>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <spinlocks>4095</spinlocks>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <stimer_direct>on</stimer_direct>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </defaults>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </hyperv>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <launchSecurity supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='sectype'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tdx</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </launchSecurity>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: </domainCapabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.529 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.533 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 04:47:09 np0005533252 nova_compute[230010]: <domainCapabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <domain>kvm</domain>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <arch>x86_64</arch>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <vcpu max='4096'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <iothreads supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <os supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <enum name='firmware'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>efi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <loader supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>rom</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pflash</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='readonly'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>yes</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>no</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='secure'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>yes</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>no</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </loader>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='host-passthrough' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='hostPassthroughMigratable'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>on</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>off</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='maximum' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='maximumMigratable'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>on</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>off</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='host-model' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <vendor>AMD</vendor>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='x2apic'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='hypervisor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='stibp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='overflow-recov'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='succor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='lbrv'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc-scale'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='flushbyasid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='pause-filter'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='pfthreshold'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='disable' name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='custom' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Dhyana-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Genoa'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='auto-ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='auto-ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-128'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-256'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-512'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v6'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v7'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='KnightsMill'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512er'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512pf'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='KnightsMill-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512er'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512pf'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G4-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tbm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G5-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tbm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SierraForest'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cmpccxadd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SierraForest-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cmpccxadd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='athlon'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='athlon-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='core2duo'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='core2duo-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='coreduo'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='coreduo-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='n270'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='n270-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='phenom'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='phenom-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <memoryBacking supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <enum name='sourceType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>file</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>anonymous</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>memfd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </memoryBacking>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <disk supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='diskDevice'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>disk</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>cdrom</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>floppy</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>lun</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='bus'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>fdc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>scsi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>sata</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-non-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <graphics supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vnc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>egl-headless</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dbus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <video supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='modelType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vga</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>cirrus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>none</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>bochs</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ramfb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <hostdev supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='mode'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>subsystem</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='startupPolicy'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>default</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>mandatory</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>requisite</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>optional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='subsysType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pci</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>scsi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='capsType'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='pciBackend'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </hostdev>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <rng supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-non-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>random</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>egd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>builtin</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <filesystem supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='driverType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>path</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>handle</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtiofs</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </filesystem>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <tpm supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tpm-tis</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tpm-crb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>emulator</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>external</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendVersion'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>2.0</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </tpm>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <redirdev supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='bus'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </redirdev>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <channel supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pty</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>unix</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </channel>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <crypto supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>qemu</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>builtin</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </crypto>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <interface supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>default</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>passt</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <panic supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>isa</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>hyperv</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </panic>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <console supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>null</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pty</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dev</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>file</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pipe</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>stdio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>udp</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tcp</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>unix</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>qemu-vdagent</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dbus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <gic supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <vmcoreinfo supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <genid supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <backingStoreInput supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <backup supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <async-teardown supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <ps2 supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <sev supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <sgx supported='no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <hyperv supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='features'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>relaxed</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vapic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>spinlocks</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vpindex</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>runtime</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>synic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>stimer</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>reset</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vendor_id</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>frequencies</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>reenlightenment</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tlbflush</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ipi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>avic</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>emsr_bitmap</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>xmm_input</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <defaults>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <spinlocks>4095</spinlocks>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <stimer_direct>on</stimer_direct>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </defaults>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </hyperv>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <launchSecurity supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='sectype'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tdx</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </launchSecurity>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: </domainCapabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 04:47:09 np0005533252 nova_compute[230010]: 2025-11-24 09:47:09.588 230014 DEBUG nova.virt.libvirt.host [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 04:47:09 np0005533252 nova_compute[230010]: <domainCapabilities>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <domain>kvm</domain>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <arch>x86_64</arch>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <vcpu max='240'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <iothreads supported='yes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <os supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <enum name='firmware'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <loader supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>rom</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pflash</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='readonly'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>yes</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>no</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='secure'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>no</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </loader>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='host-passthrough' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='hostPassthroughMigratable'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>on</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>off</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='maximum' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='maximumMigratable'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>on</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>off</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='host-model' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <vendor>AMD</vendor>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='x2apic'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='hypervisor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='stibp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='overflow-recov'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='succor'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='lbrv'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='tsc-scale'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='flushbyasid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='pause-filter'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='pfthreshold'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <feature policy='disable' name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <mode name='custom' supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Broadwell-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Cooperlake-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Denverton-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Dhyana-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Genoa'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='auto-ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='auto-ibrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Milan-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amd-psfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='no-nested-data-bp'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='null-sel-clr-base'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='stibp-always-on'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-Rome-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='EPYC-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='GraniteRapids-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-128'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-256'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx10-512'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='prefetchiti'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Haswell-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v6'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Icelake-Server-v7'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='IvyBridge-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='KnightsMill'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512er'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512pf'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='KnightsMill-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4fmaps'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-4vnniw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512er'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512pf'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G4-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tbm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Opteron_G5-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fma4'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tbm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xop'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SapphireRapids-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='amx-tile'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-bf16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-fp16'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512-vpopcntdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bitalg'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vbmi2'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrc'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fzrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='la57'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='taa-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='tsx-ldtrk'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xfd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SierraForest'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cmpccxadd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='SierraForest-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ifma'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-ne-convert'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx-vnni-int8'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='bus-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cmpccxadd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fbsdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='fsrs'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ibrs-all'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mcdt-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pbrsb-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='psdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='sbdr-ssdp-no'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='serialize'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vaes'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='vpclmulqdq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Client-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='hle'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='rtm'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Skylake-Server-v5'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512bw'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512cd'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512dq'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512f'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='avx512vl'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='invpcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pcid'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='pku'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='mpx'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v2'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v3'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='core-capability'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='split-lock-detect'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='Snowridge-v4'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='cldemote'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='erms'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='gfni'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdir64b'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='movdiri'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='xsaves'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='athlon'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='athlon-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='core2duo'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='core2duo-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='coreduo'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='coreduo-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='n270'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='n270-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='ss'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='phenom'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <blockers model='phenom-v1'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnow'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <feature name='3dnowext'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </blockers>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </mode>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <memoryBacking supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <enum name='sourceType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>file</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>anonymous</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <value>memfd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  </memoryBacking>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <disk supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='diskDevice'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>disk</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>cdrom</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>floppy</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>lun</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='bus'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ide</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>fdc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>scsi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>sata</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-non-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <graphics supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='type'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vnc</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>egl-headless</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>dbus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <video supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='modelType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>vga</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>cirrus</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>none</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>bochs</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>ramfb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <hostdev supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='mode'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>subsystem</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='startupPolicy'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>default</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>mandatory</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>requisite</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>optional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='subsysType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>usb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>pci</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>scsi</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='capsType'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='pciBackend'/>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </hostdev>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <rng supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtio-non-transitional</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>random</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>egd</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>builtin</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <filesystem supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='driverType'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>path</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>handle</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>virtiofs</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    </filesystem>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:    <tpm supported='yes'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='model'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tpm-tis</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>tpm-crb</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      </enum>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:      <enum name='backendModel'>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>emulator</value>
Nov 24 04:47:09 np0005533252 nova_compute[230010]:        <value>external</value>
Nov 24 04:49:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:50 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:51 np0005533252 rsyslogd[1005]: imjournal: 1384 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 04:49:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:49:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:49:51 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:51 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:52 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:49:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:52.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:49:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:52 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/094953 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:49:53 np0005533252 podman[231964]: 2025-11-24 09:49:53.314216984 +0000 UTC m=+0.057951715 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 04:49:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:49:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:49:53 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:53 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:54 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:49:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:54.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:49:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:49:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:54 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:55 np0005533252 podman[231985]: 2025-11-24 09:49:55.367757953 +0000 UTC m=+0.102758427 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 04:49:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:49:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:55.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:49:55 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:55 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:56 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:49:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:49:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:56 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:49:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:49:57 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:57 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:58 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:49:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:58.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:49:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:58 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:49:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:49:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:49:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:49:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:49:59 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:49:59 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:00 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:00.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: overall HEALTH_OK
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.610446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800610488, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2353, "num_deletes": 251, "total_data_size": 6357529, "memory_usage": 6430848, "flush_reason": "Manual Compaction"}
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800630100, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4163317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20835, "largest_seqno": 23183, "table_properties": {"data_size": 4153694, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19510, "raw_average_key_size": 20, "raw_value_size": 4134666, "raw_average_value_size": 4284, "num_data_blocks": 268, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977576, "oldest_key_time": 1763977576, "file_creation_time": 1763977800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 19733 microseconds, and 9584 cpu microseconds.
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.630180) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4163317 bytes OK
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.630209) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.631883) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.631913) EVENT_LOG_v1 {"time_micros": 1763977800631904, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.631940) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6347095, prev total WAL file size 6347095, number of live WAL files 2.
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.634731) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(4065KB)], [39(12MB)]
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800634779, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 17118378, "oldest_snapshot_seqno": -1}
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5452 keys, 14940846 bytes, temperature: kUnknown
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800715795, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14940846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14902070, "index_size": 24074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137561, "raw_average_key_size": 25, "raw_value_size": 14801259, "raw_average_value_size": 2714, "num_data_blocks": 993, "num_entries": 5452, "num_filter_entries": 5452, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.716178) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14940846 bytes
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.0 rd, 184.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.4 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 5972, records dropped: 520 output_compression: NoCompression
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717651) EVENT_LOG_v1 {"time_micros": 1763977800717637, "job": 22, "event": "compaction_finished", "compaction_time_micros": 81115, "compaction_time_cpu_micros": 55626, "output_level": 6, "num_output_files": 1, "total_output_size": 14940846, "num_input_records": 5972, "num_output_records": 5452, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800719248, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800724096, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.634628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:50:00 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:50:00.724198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:50:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:00 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:01.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:01 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:01 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:02 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:02.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:02 np0005533252 podman[232017]: 2025-11-24 09:50:02.328173993 +0000 UTC m=+0.072234366 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:50:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:02 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:50:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:02 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:03 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:03 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:04 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:04.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:04 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:05 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:50:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:05 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:50:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:05 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:06 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:06.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:06 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:07.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:07 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:08 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:08.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:08 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:50:08 np0005533252 nova_compute[230010]: 2025-11-24 09:50:08.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:50:08 np0005533252 nova_compute[230010]: 2025-11-24 09:50:08.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:50:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:08 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f40044d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:09.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:09 np0005533252 nova_compute[230010]: 2025-11-24 09:50:09.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:50:09 np0005533252 nova_compute[230010]: 2025-11-24 09:50:09.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:50:09 np0005533252 nova_compute[230010]: 2025-11-24 09:50:09.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:50:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:10 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:10 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.776 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.776 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.776 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.795 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:50:10 np0005533252 nova_compute[230010]: 2025-11-24 09:50:10.796 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:50:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:10 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:50:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3497774359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.254 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.402 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.403 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5248MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.404 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.404 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.469 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.470 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.485 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:50:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:11.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:50:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2769118241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.912 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.917 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.941 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.943 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 04:50:11 np0005533252 nova_compute[230010]: 2025-11-24 09:50:11.943 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:50:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:12 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4080014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:12 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:12.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:12 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe4000016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:12 np0005533252 nova_compute[230010]: 2025-11-24 09:50:12.938 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:50:12 np0005533252 nova_compute[230010]: 2025-11-24 09:50:12.939 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:50:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:13.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:14 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:14 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe408001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:14.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:14 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:15 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095015 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:50:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:50:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:50:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:50:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:50:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:16 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe400003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:16 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f00036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:16.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:16 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:18 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe408001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:18 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe400003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:18.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:18 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f0003700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:20 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:20 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe408001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:50:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:50:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:50:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:50:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:50:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:50:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:20.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:20 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe400003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:21.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:22 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3f0003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:22 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:22 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe408003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:24 np0005533252 kernel: ganesha.nfsd[231924]: segfault at 50 ip 00007fe4c0ab232e sp 00007fe489ffa210 error 4 in libntirpc.so.5.8[7fe4c0a97000+2c000] likely on CPU 0 (core 0, socket 0)
Nov 24 04:50:24 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:50:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[231871]: 24/11/2025 09:50:24 : epoch 69242a2b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe400003780 fd 38 proxy ignored for local
Nov 24 04:50:24 np0005533252 systemd[1]: Started Process Core Dump (PID 232117/UID 0).
Nov 24 04:50:24 np0005533252 podman[232118]: 2025-11-24 09:50:24.105592889 +0000 UTC m=+0.060019101 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 24 04:50:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:24.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:25 np0005533252 systemd-coredump[232119]: Process 231875 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007fe4c0ab232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:50:25 np0005533252 systemd[1]: systemd-coredump@13-232117-0.service: Deactivated successfully.
Nov 24 04:50:25 np0005533252 systemd[1]: systemd-coredump@13-232117-0.service: Consumed 1.179s CPU time.
Nov 24 04:50:25 np0005533252 podman[232169]: 2025-11-24 09:50:25.315969601 +0000 UTC m=+0.025578297 container died 1da4f2ab964ee7c0bfd002d80d7b0b1988bc7d8f299c0f28278aaaeea008eea6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:50:25 np0005533252 systemd[1]: var-lib-containers-storage-overlay-7965caac272a713787c8d26f8d5128ae9091a3b21b0f38c635c05f705b356082-merged.mount: Deactivated successfully.
Nov 24 04:50:25 np0005533252 podman[232169]: 2025-11-24 09:50:25.351020419 +0000 UTC m=+0.060629085 container remove 1da4f2ab964ee7c0bfd002d80d7b0b1988bc7d8f299c0f28278aaaeea008eea6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 04:50:25 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:50:25 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:50:25 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.396s CPU time.
Nov 24 04:50:25 np0005533252 podman[232210]: 2025-11-24 09:50:25.598282783 +0000 UTC m=+0.081127647 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 24 04:50:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:26.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:27.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:50:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:29.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:50:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095030 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:50:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:30.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 04:50:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:30 np0005533252 podman[232358]: 2025-11-24 09:50:30.26845105 +0000 UTC m=+0.089274466 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 04:50:30 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:30 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:30 np0005533252 podman[232358]: 2025-11-24 09:50:30.38277973 +0000 UTC m=+0.203603136 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 04:50:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:50:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:50:31 np0005533252 podman[232495]: 2025-11-24 09:50:31.048767415 +0000 UTC m=+0.129134402 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:50:31 np0005533252 podman[232520]: 2025-11-24 09:50:31.171810668 +0000 UTC m=+0.108026776 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:50:31 np0005533252 podman[232495]: 2025-11-24 09:50:31.188722782 +0000 UTC m=+0.269089799 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 04:50:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:31.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:31 np0005533252 podman[232614]: 2025-11-24 09:50:31.712961566 +0000 UTC m=+0.054934626 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:50:31 np0005533252 podman[232614]: 2025-11-24 09:50:31.740726536 +0000 UTC m=+0.082699596 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 04:50:31 np0005533252 podman[232682]: 2025-11-24 09:50:31.961277996 +0000 UTC m=+0.059389116 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, version=2.2.4, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Nov 24 04:50:32 np0005533252 podman[232682]: 2025-11-24 09:50:32.017875711 +0000 UTC m=+0.115986831 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 24 04:50:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:50:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:50:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:32.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:50:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:50:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 04:50:32 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:50:33 np0005533252 podman[232796]: 2025-11-24 09:50:33.334918316 +0000 UTC m=+0.067293368 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:50:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:50:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:33.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:34 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:34 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 04:50:34 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:50:34 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:34 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:34 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:50:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:35 np0005533252 ceph-mon[80009]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 24 04:50:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:35.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:35 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 14.
Nov 24 04:50:35 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:50:35 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.396s CPU time.
Nov 24 04:50:35 np0005533252 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 04:50:35 np0005533252 podman[232866]: 2025-11-24 09:50:35.947349724 +0000 UTC m=+0.041367493 container create 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:50:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 04:50:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:50:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:50:35 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vvoanr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 04:50:36 np0005533252 podman[232866]: 2025-11-24 09:50:36.00230481 +0000 UTC m=+0.096322609 container init 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:50:36 np0005533252 podman[232866]: 2025-11-24 09:50:36.010228663 +0000 UTC m=+0.104246432 container start 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 04:50:36 np0005533252 bash[232866]: 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e
Nov 24 04:50:36 np0005533252 podman[232866]: 2025-11-24 09:50:35.931442815 +0000 UTC m=+0.025460604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:50:36 np0005533252 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:50:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 04:50:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 04:50:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 04:50:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 04:50:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 04:50:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 04:50:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 04:50:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 04:50:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:36.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:50:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:50:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:38.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:38 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:38 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:39.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:40.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 04:50:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 04:50:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:42.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:44.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 24 04:50:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:50:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:50:45 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:50:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:45.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:46.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:47.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:48.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 04:50:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89dc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:49.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc000da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:50.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:51.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095052 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 04:50:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c80014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:52.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c80014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:53.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:54 np0005533252 podman[232997]: 2025-11-24 09:50:54.348442145 +0000 UTC m=+0.086589741 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd)
Nov 24 04:50:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:55.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8002460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:56 np0005533252 podman[233019]: 2025-11-24 09:50:56.326642026 +0000 UTC m=+0.071054411 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 04:50:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:57 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:50:57.358 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:50:57 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:50:57.359 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 04:50:57 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:50:57.359 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:50:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:50:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:57.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:50:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8002460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:50:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:50:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:50:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:50:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:50:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:50:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:59.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:00.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:51:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:51:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8002460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 04:51:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1960026280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 04:51:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 04:51:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1960026280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 04:51:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:01.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:02 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:02 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:02.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:02 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:03.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:04 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:04 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:04.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:04 np0005533252 podman[233049]: 2025-11-24 09:51:04.339648513 +0000 UTC m=+0.074296430 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 04:51:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:04 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:04 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:05.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:06 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:06 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:06 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:06 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:07.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:08 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:08 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:08.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:08 np0005533252 nova_compute[230010]: 2025-11-24 09:51:08.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:51:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:08 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:09.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:10 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:10 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:10.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:10 np0005533252 nova_compute[230010]: 2025-11-24 09:51:10.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:10 np0005533252 nova_compute[230010]: 2025-11-24 09:51:10.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 04:51:10 np0005533252 nova_compute[230010]: 2025-11-24 09:51:10.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 04:51:10 np0005533252 nova_compute[230010]: 2025-11-24 09:51:10.776 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 04:51:10 np0005533252 nova_compute[230010]: 2025-11-24 09:51:10.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:10 np0005533252 nova_compute[230010]: 2025-11-24 09:51:10.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:10 np0005533252 nova_compute[230010]: 2025-11-24 09:51:10.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:10 np0005533252 nova_compute[230010]: 2025-11-24 09:51:10.777 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 04:51:10 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:10 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:11.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:11 np0005533252 nova_compute[230010]: 2025-11-24 09:51:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:11 np0005533252 nova_compute[230010]: 2025-11-24 09:51:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:11 np0005533252 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:51:11 np0005533252 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:51:11 np0005533252 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:51:11 np0005533252 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 04:51:11 np0005533252 nova_compute[230010]: 2025-11-24 09:51:11.789 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:51:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:51:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2281961602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:51:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:12 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:12 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:51:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/423930642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.225 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:51:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:12.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.368 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.369 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5274MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.369 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.369 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.425 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.426 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.457 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:51:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:51:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4230280917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.897 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.902 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.914 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.915 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 04:51:12 np0005533252 nova_compute[230010]: 2025-11-24 09:51:12.916 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:51:12 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:12 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:13.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:13 np0005533252 nova_compute[230010]: 2025-11-24 09:51:13.910 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:13 np0005533252 nova_compute[230010]: 2025-11-24 09:51:13.910 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:13 np0005533252 nova_compute[230010]: 2025-11-24 09:51:13.928 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:51:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:14 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:14 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:14.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:14 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:14 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:51:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:51:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:16 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:16 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:16.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:16 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:16 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:18 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:18 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89bc0027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:18 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:18 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:19.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:20 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:51:20.051 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:51:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:51:20.052 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:51:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:51:20.052 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:51:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:20 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:20.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:20 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:20 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:21.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:22 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:22 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:22.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:22 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:22 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:23.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:24 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:24 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:24 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:24 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:25 np0005533252 podman[233173]: 2025-11-24 09:51:25.141798692 +0000 UTC m=+0.050523358 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 04:51:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:25.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:26 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:26 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d40022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:26 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:26 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:27 np0005533252 podman[233196]: 2025-11-24 09:51:27.337009987 +0000 UTC m=+0.077624162 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 04:51:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:28 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:28 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:28 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:28 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d40022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:29.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:30 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:30 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:51:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:51:30 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:30 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:32 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d40022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:32 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:32 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:32 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:34 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:34 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:34.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:34 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:34 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:35 np0005533252 podman[233228]: 2025-11-24 09:51:35.327323468 +0000 UTC m=+0.062586133 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 04:51:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:51:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:35.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:51:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:36 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:36 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:37.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:38 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:38 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:38.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:51:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:51:38 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:38 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:39 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:51:39 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:51:39 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:51:39 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:51:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:39.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:40 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:40 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:40.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:40 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:40 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:41.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:51:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:51:42 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:42 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.530884) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903531212, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1340, "num_deletes": 252, "total_data_size": 3199155, "memory_usage": 3247032, "flush_reason": "Manual Compaction"}
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903542001, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1342962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23188, "largest_seqno": 24523, "table_properties": {"data_size": 1338313, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12168, "raw_average_key_size": 20, "raw_value_size": 1328214, "raw_average_value_size": 2251, "num_data_blocks": 88, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977801, "oldest_key_time": 1763977801, "file_creation_time": 1763977903, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 11152 microseconds, and 5665 cpu microseconds.
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.542046) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1342962 bytes OK
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.542065) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.543512) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.543532) EVENT_LOG_v1 {"time_micros": 1763977903543526, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.543551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3192751, prev total WAL file size 3192751, number of live WAL files 2.
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.544601) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353034' seq:72057594037927935, type:22 .. '6D67727374617400373537' seq:0, type:0; will stop at (end)
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1311KB)], [42(14MB)]
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903544647, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16283808, "oldest_snapshot_seqno": -1}
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5565 keys, 12889673 bytes, temperature: kUnknown
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903616703, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 12889673, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12853364, "index_size": 21287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 140281, "raw_average_key_size": 25, "raw_value_size": 12753791, "raw_average_value_size": 2291, "num_data_blocks": 870, "num_entries": 5565, "num_filter_entries": 5565, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977903, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.616974) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12889673 bytes
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.7 rd, 178.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.2 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(21.7) write-amplify(9.6) OK, records in: 6042, records dropped: 477 output_compression: NoCompression
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618201) EVENT_LOG_v1 {"time_micros": 1763977903618188, "job": 24, "event": "compaction_finished", "compaction_time_micros": 72141, "compaction_time_cpu_micros": 25000, "output_level": 6, "num_output_files": 1, "total_output_size": 12889673, "num_input_records": 6042, "num_output_records": 5565, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903618588, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903621209, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.544495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:51:43.621510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:51:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:43.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:51:43 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:51:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:44 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:44 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:44.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:44 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:44 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:51:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:51:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:45.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:46 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89b0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:46 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:46.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:46 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:46 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:47.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c40008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:48 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:48 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:49.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:51:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:50.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:51:50 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:50 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c4001a40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:51.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:52.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:52 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:52 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:53.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:54.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:54 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:54 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:55 np0005533252 podman[233390]: 2025-11-24 09:51:55.315152789 +0000 UTC m=+0.055182661 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 24 04:51:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:55.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c4002360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:51:56 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:56 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:51:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:57.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:51:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:51:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:58.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:51:58 np0005533252 podman[233413]: 2025-11-24 09:51:58.381354368 +0000 UTC m=+0.122864000 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 04:51:58 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:51:58 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:51:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:51:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:51:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:51:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:59.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:52:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89c8004570 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:52:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:52:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89d4004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:52:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:52:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:52:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 04:52:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/856740024' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 04:52:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 04:52:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/856740024' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 04:52:00 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:52:00 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 04:52:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:52:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:01.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:52:02 np0005533252 kernel: ganesha.nfsd[232992]: segfault at 50 ip 00007f8a883f532e sp 00007f8a48ff8210 error 4 in libntirpc.so.5.8[7f8a883da000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 04:52:02 np0005533252 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 04:52:02 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr[232881]: 24/11/2025 09:52:02 : epoch 69242a6c : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ac004390 fd 38 proxy ignored for local
Nov 24 04:52:02 np0005533252 systemd[1]: Started Process Core Dump (PID 233443/UID 0).
Nov 24 04:52:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:02.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:03 np0005533252 systemd-coredump[233444]: Process 232885 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 53:#012#0  0x00007f8a883f532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 24 04:52:03 np0005533252 systemd[1]: systemd-coredump@14-233443-0.service: Deactivated successfully.
Nov 24 04:52:03 np0005533252 systemd[1]: systemd-coredump@14-233443-0.service: Consumed 1.154s CPU time.
Nov 24 04:52:03 np0005533252 podman[233449]: 2025-11-24 09:52:03.369130361 +0000 UTC m=+0.036435723 container died 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 04:52:03 np0005533252 systemd[1]: var-lib-containers-storage-overlay-9523fca06ca10bbbcc0c351e718a2551cbafcef26024bd43b47a36575b91d90d-merged.mount: Deactivated successfully.
Nov 24 04:52:03 np0005533252 podman[233449]: 2025-11-24 09:52:03.428252628 +0000 UTC m=+0.095557940 container remove 1503b1868ffcd35020bab5465b223d2cec50eac84dbca532a4d028d35a74126e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-0-0-compute-1-vvoanr, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:52:03 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Main process exited, code=exited, status=139/n/a
Nov 24 04:52:03 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:52:03 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.513s CPU time.
Nov 24 04:52:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:52:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:03.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:52:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:04.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:05 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095205 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:52:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:05.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:06 np0005533252 podman[233518]: 2025-11-24 09:52:06.319587657 +0000 UTC m=+0.053238965 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 04:52:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:06.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:07.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:08 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095208 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:52:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:08.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:08 np0005533252 nova_compute[230010]: 2025-11-24 09:52:08.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:08 np0005533252 nova_compute[230010]: 2025-11-24 09:52:08.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 24 04:52:08 np0005533252 nova_compute[230010]: 2025-11-24 09:52:08.786 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 24 04:52:08 np0005533252 nova_compute[230010]: 2025-11-24 09:52:08.787 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:08 np0005533252 nova_compute[230010]: 2025-11-24 09:52:08.787 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 24 04:52:08 np0005533252 nova_compute[230010]: 2025-11-24 09:52:08.797 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:09.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:10.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:10 np0005533252 nova_compute[230010]: 2025-11-24 09:52:10.868 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:11.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.804 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.805 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.805 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.805 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:52:11 np0005533252 nova_compute[230010]: 2025-11-24 09:52:11.805 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:52:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:52:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4225259137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.256 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:52:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:52:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:12.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.395 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.397 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5284MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.397 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.398 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.586 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.586 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.645 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.728 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.728 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.746 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.767 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 24 04:52:12 np0005533252 nova_compute[230010]: 2025-11-24 09:52:12.782 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:52:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:52:13 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/405630170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:52:13 np0005533252 nova_compute[230010]: 2025-11-24 09:52:13.190 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:52:13 np0005533252 nova_compute[230010]: 2025-11-24 09:52:13.195 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:52:13 np0005533252 nova_compute[230010]: 2025-11-24 09:52:13.209 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:52:13 np0005533252 nova_compute[230010]: 2025-11-24 09:52:13.211 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 04:52:13 np0005533252 nova_compute[230010]: 2025-11-24 09:52:13.211 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:52:13 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Scheduled restart job, restart counter is at 15.
Nov 24 04:52:13 np0005533252 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:52:13 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Consumed 1.513s CPU time.
Nov 24 04:52:13 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Start request repeated too quickly.
Nov 24 04:52:13 np0005533252 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.0.0.compute-1.vvoanr.service: Failed with result 'exit-code'.
Nov 24 04:52:13 np0005533252 systemd[1]: Failed to start Ceph nfs.cephfs.0.0.compute-1.vvoanr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 04:52:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:13.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:14 np0005533252 nova_compute[230010]: 2025-11-24 09:52:14.211 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:14 np0005533252 nova_compute[230010]: 2025-11-24 09:52:14.211 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 04:52:14 np0005533252 nova_compute[230010]: 2025-11-24 09:52:14.211 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 04:52:14 np0005533252 nova_compute[230010]: 2025-11-24 09:52:14.230 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 04:52:14 np0005533252 nova_compute[230010]: 2025-11-24 09:52:14.231 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:14 np0005533252 nova_compute[230010]: 2025-11-24 09:52:14.231 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:14 np0005533252 nova_compute[230010]: 2025-11-24 09:52:14.779 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:52:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:52:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:15.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:15 np0005533252 nova_compute[230010]: 2025-11-24 09:52:15.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:52:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:16.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:17.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:18.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.544311) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938544375, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 578, "num_deletes": 255, "total_data_size": 918917, "memory_usage": 931112, "flush_reason": "Manual Compaction"}
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938549167, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 604618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24528, "largest_seqno": 25101, "table_properties": {"data_size": 601700, "index_size": 890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6559, "raw_average_key_size": 17, "raw_value_size": 595842, "raw_average_value_size": 1606, "num_data_blocks": 40, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977903, "oldest_key_time": 1763977903, "file_creation_time": 1763977938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 4885 microseconds, and 2534 cpu microseconds.
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.549206) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 604618 bytes OK
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.549222) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550469) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550484) EVENT_LOG_v1 {"time_micros": 1763977938550481, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 915588, prev total WAL file size 915588, number of live WAL files 2.
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(590KB)], [45(12MB)]
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938550984, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13494291, "oldest_snapshot_seqno": -1}
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5418 keys, 13358487 bytes, temperature: kUnknown
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938621612, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13358487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13322263, "index_size": 21586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138413, "raw_average_key_size": 25, "raw_value_size": 13224294, "raw_average_value_size": 2440, "num_data_blocks": 879, "num_entries": 5418, "num_filter_entries": 5418, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763977938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.621869) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13358487 bytes
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.623161) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.8 rd, 188.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.3 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(44.4) write-amplify(22.1) OK, records in: 5936, records dropped: 518 output_compression: NoCompression
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.623179) EVENT_LOG_v1 {"time_micros": 1763977938623172, "job": 26, "event": "compaction_finished", "compaction_time_micros": 70714, "compaction_time_cpu_micros": 24236, "output_level": 6, "num_output_files": 1, "total_output_size": 13358487, "num_input_records": 5936, "num_output_records": 5418, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938623361, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938625706, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.550866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:52:18 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:52:18.625832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:52:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:52:20.053 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:52:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:52:20.053 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:52:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:52:20.053 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:52:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:52:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:52:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:22.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:23.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:24.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:25 np0005533252 podman[233614]: 2025-11-24 09:52:25.474148869 +0000 UTC m=+0.102051677 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Nov 24 04:52:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:25.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:26.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:27.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:28.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:29 np0005533252 podman[233637]: 2025-11-24 09:52:29.342023754 +0000 UTC m=+0.082322383 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 24 04:52:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:52:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:52:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:30.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:52:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:52:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:31.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:32.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:33.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - - [24/Nov/2025:09:52:33.805 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.001000024s
Nov 24 04:52:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:34.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:36.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:37 np0005533252 podman[233667]: 2025-11-24 09:52:37.318251786 +0000 UTC m=+0.049556202 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 24 04:52:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:37.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:38.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Nov 24 04:52:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:39.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Nov 24 04:52:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:40.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Nov 24 04:52:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:41.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:42.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Nov 24 04:52:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [WARNING] 327/095243 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 04:52:43 np0005533252 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy[85975]: [ALERT] 327/095243 (4) : backend 'backend' has no server available!
Nov 24 04:52:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:43.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:52:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:52:44 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:52:44 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:52:44 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:52:44 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:52:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:52:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:52:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:45.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:52:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:46.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:52:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:47.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:52:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:52:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:48.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:52:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:52:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:49.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:50.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:51.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:52:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:52.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:52:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:52:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:53.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:52:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:54.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:55.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:56 np0005533252 podman[233828]: 2025-11-24 09:52:56.345026499 +0000 UTC m=+0.084675841 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 04:52:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:57.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:58.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:52:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:52:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:52:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:52:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:59.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:00.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:00 np0005533252 podman[233850]: 2025-11-24 09:53:00.409198605 +0000 UTC m=+0.148538493 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:53:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:53:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:53:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:53:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:01.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:53:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:53:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:53:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:03.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:05 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:53:05.779 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:53:05 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:53:05.780 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 04:53:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:05.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:07.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:08 np0005533252 podman[233905]: 2025-11-24 09:53:08.342434574 +0000 UTC m=+0.083543584 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 24 04:53:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:08.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:53:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:09.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:53:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:10.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:11 np0005533252 nova_compute[230010]: 2025-11-24 09:53:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:11 np0005533252 nova_compute[230010]: 2025-11-24 09:53:11.788 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:53:11 np0005533252 nova_compute[230010]: 2025-11-24 09:53:11.788 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:53:11 np0005533252 nova_compute[230010]: 2025-11-24 09:53:11.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:53:11 np0005533252 nova_compute[230010]: 2025-11-24 09:53:11.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:53:11 np0005533252 nova_compute[230010]: 2025-11-24 09:53:11.790 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:53:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:11.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:53:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2058198224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.227 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.389 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.390 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5299MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.390 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.391 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:53:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:12.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.445 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.446 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.468 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:53:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:53:12.782 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:53:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:53:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4266170854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.906 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.912 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.928 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.930 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 04:53:12 np0005533252 nova_compute[230010]: 2025-11-24 09:53:12.930 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:53:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:13.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:13 np0005533252 nova_compute[230010]: 2025-11-24 09:53:13.931 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:13 np0005533252 nova_compute[230010]: 2025-11-24 09:53:13.931 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:13 np0005533252 nova_compute[230010]: 2025-11-24 09:53:13.932 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:13 np0005533252 nova_compute[230010]: 2025-11-24 09:53:13.932 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:13 np0005533252 nova_compute[230010]: 2025-11-24 09:53:13.932 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:53:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:14.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:14 np0005533252 nova_compute[230010]: 2025-11-24 09:53:14.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:14 np0005533252 nova_compute[230010]: 2025-11-24 09:53:14.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 04:53:14 np0005533252 nova_compute[230010]: 2025-11-24 09:53:14.767 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 04:53:14 np0005533252 nova_compute[230010]: 2025-11-24 09:53:14.784 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 04:53:14 np0005533252 nova_compute[230010]: 2025-11-24 09:53:14.785 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:53:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:53:15 np0005533252 nova_compute[230010]: 2025-11-24 09:53:15.778 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:15.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:16.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:16 np0005533252 nova_compute[230010]: 2025-11-24 09:53:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:17 np0005533252 nova_compute[230010]: 2025-11-24 09:53:17.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:53:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:17.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:18.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:19.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:53:20.053 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:53:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:53:20.054 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:53:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:53:20.054 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:53:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:20.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:53:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:21.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:53:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:22.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:23.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:53:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:25.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.041759) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006041853, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 963, "num_deletes": 251, "total_data_size": 2118719, "memory_usage": 2158160, "flush_reason": "Manual Compaction"}
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006052102, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1394668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25106, "largest_seqno": 26064, "table_properties": {"data_size": 1390172, "index_size": 2148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9990, "raw_average_key_size": 19, "raw_value_size": 1381042, "raw_average_value_size": 2745, "num_data_blocks": 95, "num_entries": 503, "num_filter_entries": 503, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977939, "oldest_key_time": 1763977939, "file_creation_time": 1763978006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 10445 microseconds, and 5686 cpu microseconds.
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.052218) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1394668 bytes OK
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.052262) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.053593) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.053609) EVENT_LOG_v1 {"time_micros": 1763978006053605, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.053627) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2113863, prev total WAL file size 2113863, number of live WAL files 2.
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.054669) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1361KB)], [48(12MB)]
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006054761, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14753155, "oldest_snapshot_seqno": -1}
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5401 keys, 12547278 bytes, temperature: kUnknown
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006127322, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12547278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12511769, "index_size": 20935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138783, "raw_average_key_size": 25, "raw_value_size": 12414574, "raw_average_value_size": 2298, "num_data_blocks": 848, "num_entries": 5401, "num_filter_entries": 5401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.127836) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12547278 bytes
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.138084) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.8 rd, 172.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(19.6) write-amplify(9.0) OK, records in: 5921, records dropped: 520 output_compression: NoCompression
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.138137) EVENT_LOG_v1 {"time_micros": 1763978006138117, "job": 28, "event": "compaction_finished", "compaction_time_micros": 72753, "compaction_time_cpu_micros": 40477, "output_level": 6, "num_output_files": 1, "total_output_size": 12547278, "num_input_records": 5921, "num_output_records": 5401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006138837, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006142817, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.054494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:53:26 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:53:26.142886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:53:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:27 np0005533252 podman[234002]: 2025-11-24 09:53:27.317501114 +0000 UTC m=+0.061181936 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 24 04:53:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:53:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:28.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:53:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Nov 24 04:53:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:53:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:53:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Nov 24 04:53:31 np0005533252 podman[234024]: 2025-11-24 09:53:31.367890601 +0000 UTC m=+0.098670763 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 04:53:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:31.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Nov 24 04:53:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:33.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:35.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:36.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:37.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:38.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:39 np0005533252 podman[234056]: 2025-11-24 09:53:39.306281837 +0000 UTC m=+0.051601763 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:53:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:40.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:41.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:42.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:44.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:53:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:53:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:46.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:47.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:53:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:53:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:53:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:50.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:53:50 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 04:53:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:53:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:53:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:53:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:53:52 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:53:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:53.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:54.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:55.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:56.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:56 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:53:56 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:53:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:57 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:53:57 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:53:58 np0005533252 podman[234215]: 2025-11-24 09:53:58.326574732 +0000 UTC m=+0.062639352 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:53:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:53:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:53:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:53:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:53:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:59.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:54:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:54:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:01.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:02 np0005533252 podman[234236]: 2025-11-24 09:54:02.341137235 +0000 UTC m=+0.086124826 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 04:54:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:03.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:04.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:05.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:06 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:06.405 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:54:06 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:06.406 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 04:54:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:06.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:07.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:08.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:09.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:10 np0005533252 podman[234291]: 2025-11-24 09:54:10.324367135 +0000 UTC m=+0.057536918 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 24 04:54:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:10.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:11.407 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:54:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:11.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:12.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.801 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.802 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.813 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.891 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.892 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.899 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.899 230014 INFO nova.compute.claims [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 24 04:54:12 np0005533252 nova_compute[230010]: 2025-11-24 09:54:12.993 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:54:13 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1658824246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.413 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.421 230014 DEBUG nova.compute.provider_tree [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.440 230014 DEBUG nova.scheduler.client.report [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.504 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.504 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.559 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.559 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.584 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.599 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.689 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.690 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.691 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Creating image(s)#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.721 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.752 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.779 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.781 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.782 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.784 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.785 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.785 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.799 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.799 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.799 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.800 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:54:13 np0005533252 nova_compute[230010]: 2025-11-24 09:54:13.800 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:13.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.104 230014 DEBUG nova.virt.libvirt.imagebackend [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image locations are: [{'url': 'rbd://84a084c3-61a7-5de7-8207-1f88efa59a64/images/6ef14bdf-4f04-4400-8040-4409d9d5271e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://84a084c3-61a7-5de7-8207-1f88efa59a64/images/6ef14bdf-4f04-4400-8040-4409d9d5271e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 24 04:54:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:54:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4102294369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.231 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.332 230014 WARNING oslo_policy.policy [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.333 230014 WARNING oslo_policy.policy [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.335 230014 DEBUG nova.policy [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.377 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.378 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5257MB free_disk=59.942718505859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.378 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.378 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.441 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 4313a8bf-5a2a-4de5-84e7-ead18a049c18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.441 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.442 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.479 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 04:54:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:14.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 04:54:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:54:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3060430779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.934 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.953 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.961 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.996 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.997 230014 DEBUG nova.virt.images [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] 6ef14bdf-4f04-4400-8040-4409d9d5271e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.999 230014 DEBUG nova.privsep.utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 24 04:54:14 np0005533252 nova_compute[230010]: 2025-11-24 09:54:14.999 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.014 230014 ERROR nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [req-9793cf5d-762b-438f-baff-1525d77653cb] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 1b7b0f22-dba8-42a8-9de3-763c9152946e.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-9793cf5d-762b-438f-baff-1525d77653cb"}]}#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.030 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.048 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.048 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.061 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.080 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.112 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.151 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.156 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.209 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.210 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.240 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.246 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.369 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Successfully created port: 31962c69-e86c-4431-b40a-e84cb6d9b71d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 24 04:54:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:54:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.508 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:54:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2342644203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.570 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.576 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.605 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.667 230014 DEBUG nova.objects.instance [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 4313a8bf-5a2a-4de5-84e7-ead18a049c18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.669 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updated inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.669 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.669 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.688 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.689 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Ensure instance console log exists: /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.689 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.689 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.690 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.750 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 04:54:15 np0005533252 nova_compute[230010]: 2025-11-24 09:54:15.750 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:54:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:16.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.690 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Successfully updated port: 31962c69-e86c-4431-b40a-e84cb6d9b71d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.702 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.703 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.703 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.730 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.731 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.731 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.751 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.751 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.752 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.785 230014 DEBUG nova.compute.manager [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-changed-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.785 230014 DEBUG nova.compute.manager [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Refreshing instance network info cache due to event network-changed-31962c69-e86c-4431-b40a-e84cb6d9b71d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.785 230014 DEBUG oslo_concurrency.lockutils [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 04:54:16 np0005533252 nova_compute[230010]: 2025-11-24 09:54:16.844 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.718 230014 DEBUG nova.network.neutron [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Updating instance_info_cache with network_info: [{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.740 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.740 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance network_info: |[{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.740 230014 DEBUG oslo_concurrency.lockutils [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.741 230014 DEBUG nova.network.neutron [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Refreshing network info cache for port 31962c69-e86c-4431-b40a-e84cb6d9b71d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.743 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start _get_guest_xml network_info=[{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.747 230014 WARNING nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.753 230014 DEBUG nova.virt.libvirt.host [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.753 230014 DEBUG nova.virt.libvirt.host [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.759 230014 DEBUG nova.virt.libvirt.host [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.760 230014 DEBUG nova.virt.libvirt.host [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.760 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.761 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.761 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.761 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.761 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.762 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.763 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.763 230014 DEBUG nova.virt.hardware [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.766 230014 DEBUG nova.privsep.utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 04:54:17 np0005533252 nova_compute[230010]: 2025-11-24 09:54:17.767 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:54:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:17.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 04:54:18 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2473323629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.232 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.258 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.261 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:54:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:18.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 04:54:18 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3807305221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.687 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.689 230014 DEBUG nova.virt.libvirt.vif [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-904956127',display_name='tempest-TestNetworkBasicOps-server-904956127',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-904956127',id=2,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBk6JkAuduqittiDGA4pBhSCzmrjSnKU2daRXm5XDAaZpUlbHNfHVDmOyWJWR78b4GrvBoMlHYEMPqcBJQA/sKOhpsOzfkRRFgAuDlkN09WkiLcyZB4s6iUYsG2XLZZzXw==',key_name='tempest-TestNetworkBasicOps-831372657',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-gukftcea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:54:13Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=4313a8bf-5a2a-4de5-84e7-ead18a049c18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.689 230014 DEBUG nova.network.os_vif_util [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.690 230014 DEBUG nova.network.os_vif_util [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.692 230014 DEBUG nova.objects.instance [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 4313a8bf-5a2a-4de5-84e7-ead18a049c18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.705 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] End _get_guest_xml xml=<domain type="kvm">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <uuid>4313a8bf-5a2a-4de5-84e7-ead18a049c18</uuid>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <name>instance-00000002</name>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <memory>131072</memory>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <vcpu>1</vcpu>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <nova:name>tempest-TestNetworkBasicOps-server-904956127</nova:name>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <nova:creationTime>2025-11-24 09:54:17</nova:creationTime>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <nova:flavor name="m1.nano">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <nova:memory>128</nova:memory>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <nova:disk>1</nova:disk>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <nova:swap>0</nova:swap>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <nova:vcpus>1</nova:vcpus>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      </nova:flavor>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <nova:owner>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      </nova:owner>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <nova:ports>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <nova:port uuid="31962c69-e86c-4431-b40a-e84cb6d9b71d">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:          <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        </nova:port>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      </nova:ports>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </nova:instance>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <sysinfo type="smbios">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <entry name="manufacturer">RDO</entry>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <entry name="product">OpenStack Compute</entry>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <entry name="serial">4313a8bf-5a2a-4de5-84e7-ead18a049c18</entry>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <entry name="uuid">4313a8bf-5a2a-4de5-84e7-ead18a049c18</entry>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <entry name="family">Virtual Machine</entry>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <boot dev="hd"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <smbios mode="sysinfo"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <vmcoreinfo/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <clock offset="utc">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <timer name="pit" tickpolicy="delay"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <timer name="hpet" present="no"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <cpu mode="host-model" match="exact">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <topology sockets="1" cores="1" threads="1"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <disk type="network" device="disk">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <target dev="vda" bus="virtio"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <disk type="network" device="cdrom">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <target dev="sda" bus="sata"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <interface type="ethernet">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <mac address="fa:16:3e:69:46:4d"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <mtu size="1442"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <target dev="tap31962c69-e8"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <serial type="pty">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <log file="/var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/console.log" append="off"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <input type="tablet" bus="usb"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <rng model="virtio">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <backend model="random">/dev/urandom</backend>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <controller type="usb" index="0"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    <memballoon model="virtio">
Nov 24 04:54:18 np0005533252 nova_compute[230010]:      <stats period="10"/>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:54:18 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:54:18 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:54:18 np0005533252 nova_compute[230010]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.706 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Preparing to wait for external event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.706 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.707 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.707 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.707 230014 DEBUG nova.virt.libvirt.vif [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-904956127',display_name='tempest-TestNetworkBasicOps-server-904956127',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-904956127',id=2,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBk6JkAuduqittiDGA4pBhSCzmrjSnKU2daRXm5XDAaZpUlbHNfHVDmOyWJWR78b4GrvBoMlHYEMPqcBJQA/sKOhpsOzfkRRFgAuDlkN09WkiLcyZB4s6iUYsG2XLZZzXw==',key_name='tempest-TestNetworkBasicOps-831372657',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-gukftcea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:54:13Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=4313a8bf-5a2a-4de5-84e7-ead18a049c18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.708 230014 DEBUG nova.network.os_vif_util [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.708 230014 DEBUG nova.network.os_vif_util [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.709 230014 DEBUG os_vif [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.745 230014 DEBUG ovsdbapp.backend.ovs_idl [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.745 230014 DEBUG ovsdbapp.backend.ovs_idl [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.746 230014 DEBUG ovsdbapp.backend.ovs_idl [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.746 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.747 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.747 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.747 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.750 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.752 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.763 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.763 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.763 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.764 230014 INFO oslo.privsep.daemon [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmplqc3st4j/privsep.sock']#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.808 230014 DEBUG nova.network.neutron [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Updated VIF entry in instance network info cache for port 31962c69-e86c-4431-b40a-e84cb6d9b71d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.809 230014 DEBUG nova.network.neutron [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Updating instance_info_cache with network_info: [{"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:54:18 np0005533252 nova_compute[230010]: 2025-11-24 09:54:18.822 230014 DEBUG oslo_concurrency.lockutils [req-b779d75a-6f69-4bca-b923-6ca261b611d4 req-12557be7-8c18-407f-9a5f-78e530f41f52 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-4313a8bf-5a2a-4de5-84e7-ead18a049c18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:54:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.437 230014 INFO oslo.privsep.daemon [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.316 234647 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.323 234647 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.327 234647 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.327 234647 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234647#033[00m
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.769 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31962c69-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.770 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31962c69-e8, col_values=(('external_ids', {'iface-id': '31962c69-e86c-4431-b40a-e84cb6d9b71d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:46:4d', 'vm-uuid': '4313a8bf-5a2a-4de5-84e7-ead18a049c18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:54:19 np0005533252 NetworkManager[48870]: <info>  [1763978059.7732] manager: (tap31962c69-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.773 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.778 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.779 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.780 230014 INFO os_vif [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8')
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.815 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.816 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.816 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:69:46:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.816 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Using config drive
Nov 24 04:54:19 np0005533252 nova_compute[230010]: 2025-11-24 09:54:19.840 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:54:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:19.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.054 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:54:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.055 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:54:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.055 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.395 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Creating config drive at /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.405 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2m5d5ro execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:54:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:20.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.559 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv2m5d5ro" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.596 230014 DEBUG nova.storage.rbd_utils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.601 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.637 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.761 230014 DEBUG oslo_concurrency.processutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config 4313a8bf-5a2a-4de5-84e7-ead18a049c18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.763 230014 INFO nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Deleting local config drive /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18/disk.config because it was imported into RBD.
Nov 24 04:54:20 np0005533252 systemd[1]: Starting libvirt secret daemon...
Nov 24 04:54:20 np0005533252 systemd[1]: Started libvirt secret daemon.
Nov 24 04:54:20 np0005533252 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 24 04:54:20 np0005533252 kernel: tap31962c69-e8: entered promiscuous mode
Nov 24 04:54:20 np0005533252 NetworkManager[48870]: <info>  [1763978060.8765] manager: (tap31962c69-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 24 04:54:20 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:20Z|00027|binding|INFO|Claiming lport 31962c69-e86c-4431-b40a-e84cb6d9b71d for this chassis.
Nov 24 04:54:20 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:20Z|00028|binding|INFO|31962c69-e86c-4431-b40a-e84cb6d9b71d: Claiming fa:16:3e:69:46:4d 10.100.0.22
Nov 24 04:54:20 np0005533252 systemd-udevd[234744]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:54:20 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.921 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.932 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:46:4d 10.100.0.22'], port_security=['fa:16:3e:69:46:4d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '4313a8bf-5a2a-4de5-84e7-ead18a049c18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e927f01-795d-4fd1-bd00-bd898db487a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '841654bd-af9d-487b-9d46-e948edd0e4cf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12eb72db-6a1a-4bb9-9912-1e510973ae62, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=31962c69-e86c-4431-b40a-e84cb6d9b71d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 04:54:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.933 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 31962c69-e86c-4431-b40a-e84cb6d9b71d in datapath 8e927f01-795d-4fd1-bd00-bd898db487a3 bound to our chassis
Nov 24 04:54:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.935 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e927f01-795d-4fd1-bd00-bd898db487a3
Nov 24 04:54:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:20.936 142336 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp_ggsxi2p/privsep.sock']
Nov 24 04:54:20 np0005533252 NetworkManager[48870]: <info>  [1763978060.9410] device (tap31962c69-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 04:54:20 np0005533252 NetworkManager[48870]: <info>  [1763978060.9422] device (tap31962c69-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 04:54:20 np0005533252 systemd-machined[193537]: New machine qemu-1-instance-00000002.
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:20.997 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:21 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:21Z|00029|binding|INFO|Setting lport 31962c69-e86c-4431-b40a-e84cb6d9b71d ovn-installed in OVS
Nov 24 04:54:21 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:21Z|00030|binding|INFO|Setting lport 31962c69-e86c-4431-b40a-e84cb6d9b71d up in Southbound
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.004 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:21 np0005533252 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.341 230014 DEBUG nova.compute.manager [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.342 230014 DEBUG oslo_concurrency.lockutils [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.342 230014 DEBUG oslo_concurrency.lockutils [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.342 230014 DEBUG oslo_concurrency.lockutils [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.343 230014 DEBUG nova.compute.manager [req-43f4ae09-12ec-4d99-aead-7815a76b3a18 req-e89f55c4-febb-46e2-8589-bb96bd760c8c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Processing event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.395 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.397 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978061.3949986, 4313a8bf-5a2a-4de5-84e7-ead18a049c18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.397 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] VM Started (Lifecycle Event)
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.409 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.414 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.418 230014 INFO nova.virt.libvirt.driver [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance spawned successfully.
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.419 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.422 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.440 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.441 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978061.3962986, 4313a8bf-5a2a-4de5-84e7-ead18a049c18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.441 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] VM Paused (Lifecycle Event)
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.446 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.446 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.446 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.447 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.447 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.447 230014 DEBUG nova.virt.libvirt.driver [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.468 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.472 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978061.399304, 4313a8bf-5a2a-4de5-84e7-ead18a049c18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.473 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] VM Resumed (Lifecycle Event)
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.505 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.509 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.514 230014 INFO nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Took 7.82 seconds to spawn the instance on the hypervisor.
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.515 230014 DEBUG nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.541 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.578 230014 INFO nova.compute.manager [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Took 8.72 seconds to build instance.
Nov 24 04:54:21 np0005533252 nova_compute[230010]: 2025-11-24 09:54:21.591 230014 DEBUG oslo_concurrency.lockutils [None req-d449e86f-227d-4a7b-88d9-0f054f56c99b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:54:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.648 142336 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 04:54:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.649 142336 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_ggsxi2p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 04:54:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.480 234803 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 04:54:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.485 234803 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 04:54:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.487 234803 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 24 04:54:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.487 234803 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234803
Nov 24 04:54:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:21.652 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8b998d-7051-4c15-b96e-5369a3e55995]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:21.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:22 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:22.494 234803 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:54:22 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:22.495 234803 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:54:22 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:22.495 234803 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:54:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:22.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:23 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.383 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[84832ca2-1cd6-4918-a57c-fc019f314f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:23 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.384 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e927f01-71 in ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 04:54:23 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.386 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e927f01-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 04:54:23 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.387 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e8790529-9e2a-469b-9932-2b8676c57d67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:23 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.392 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3d80801f-b9b9-498b-80b8-371608f85b38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:23 np0005533252 nova_compute[230010]: 2025-11-24 09:54:23.414 230014 DEBUG nova.compute.manager [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 04:54:23 np0005533252 nova_compute[230010]: 2025-11-24 09:54:23.414 230014 DEBUG oslo_concurrency.lockutils [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:54:23 np0005533252 nova_compute[230010]: 2025-11-24 09:54:23.414 230014 DEBUG oslo_concurrency.lockutils [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:54:23 np0005533252 nova_compute[230010]: 2025-11-24 09:54:23.414 230014 DEBUG oslo_concurrency.lockutils [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:54:23 np0005533252 nova_compute[230010]: 2025-11-24 09:54:23.415 230014 DEBUG nova.compute.manager [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] No waiting events found dispatching network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 04:54:23 np0005533252 nova_compute[230010]: 2025-11-24 09:54:23.415 230014 WARNING nova.compute.manager [req-890fe122-4944-4f89-8a9b-9c557ba308d1 req-00ca6d6e-c7b2-459f-869d-56cd636f6465 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received unexpected event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d for instance with vm_state active and task_state None.
Nov 24 04:54:23 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.415 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[bae98a46-4829-43d1-9939-c333f24a8b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:23 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.442 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[fc81fa67-0d2d-45ab-b2ce-6188a26684e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:23 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:23.445 142336 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6cb65j3w/privsep.sock']
Nov 24 04:54:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.125 142336 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.126 142336 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6cb65j3w/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.002 234819 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.007 234819 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.008 234819 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.009 234819 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234819
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.128 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[7255dba9-e7b5-4265-93c6-bb7d3774be5d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:24.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.633 234819 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.633 234819 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:54:24 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:24.633 234819 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:54:24 np0005533252 nova_compute[230010]: 2025-11-24 09:54:24.772 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.193 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec58f1e-24bc-4b76-a32c-37937cbe4c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.208 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[20697b15-2abf-496a-b9f2-a1400e454a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 NetworkManager[48870]: <info>  [1763978065.2109] manager: (tap8e927f01-70): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.235 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[ab014e6f-f54a-428a-bd05-922b43b34707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 systemd-udevd[234831]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.239 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[66469ccf-6a12-423a-b9fc-4e1b7d7e919f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 NetworkManager[48870]: <info>  [1763978065.2606] device (tap8e927f01-70): carrier: link connected
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.265 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[e7feb07f-8fb2-4ffe-8de2-b0b3cdc575f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.281 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[661a84b4-ad31-4db5-9b93-7ae2997220ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e927f01-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:18:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400881, 'reachable_time': 36130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234849, 'error': None, 'target': 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.296 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1243a4-a9cd-44ad-b0b9-9aef4b3c6094]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:1804'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400881, 'tstamp': 400881}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234850, 'error': None, 'target': 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.310 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6b42a785-c8c3-4abe-8cc0-a1eef06d847d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e927f01-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:18:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400881, 'reachable_time': 36130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234851, 'error': None, 'target': 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.333 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c3d24b-7fc9-4792-a3bf-4906bfbab217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.382 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[697cca9e-e99d-4173-bea8-a7dc291b4816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.384 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e927f01-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.384 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.385 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e927f01-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 04:54:25 np0005533252 nova_compute[230010]: 2025-11-24 09:54:25.387 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:25 np0005533252 NetworkManager[48870]: <info>  [1763978065.3889] manager: (tap8e927f01-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 24 04:54:25 np0005533252 kernel: tap8e927f01-70: entered promiscuous mode
Nov 24 04:54:25 np0005533252 nova_compute[230010]: 2025-11-24 09:54:25.389 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.390 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e927f01-70, col_values=(('external_ids', {'iface-id': 'a1fde06e-6df3-4ca6-8746-8510f661dd46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 04:54:25 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:25Z|00031|binding|INFO|Releasing lport a1fde06e-6df3-4ca6-8746-8510f661dd46 from this chassis (sb_readonly=0)
Nov 24 04:54:25 np0005533252 nova_compute[230010]: 2025-11-24 09:54:25.392 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:25 np0005533252 nova_compute[230010]: 2025-11-24 09:54:25.404 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.406 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e927f01-795d-4fd1-bd00-bd898db487a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e927f01-795d-4fd1-bd00-bd898db487a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.407 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f28e93be-8294-47d7-a365-b4a778d0a378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.408 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: global
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    log         /dev/log local0 debug
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    log-tag     haproxy-metadata-proxy-8e927f01-795d-4fd1-bd00-bd898db487a3
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    user        root
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    group       root
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    maxconn     1024
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    pidfile     /var/lib/neutron/external/pids/8e927f01-795d-4fd1-bd00-bd898db487a3.pid.haproxy
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    daemon
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: defaults
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    log global
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    mode http
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    option httplog
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    option dontlognull
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    option http-server-close
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    option forwardfor
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    retries                 3
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    timeout http-request    30s
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    timeout connect         30s
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    timeout client          32s
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    timeout server          32s
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    timeout http-keep-alive 30s
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: listen listener
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    bind 169.254.169.254:80
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    server metadata /var/lib/neutron/metadata_proxy
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]:    http-request add-header X-OVN-Network-ID 8e927f01-795d-4fd1-bd00-bd898db487a3
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 04:54:25 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:25.409 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'env', 'PROCESS_TAG=haproxy-8e927f01-795d-4fd1-bd00-bd898db487a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e927f01-795d-4fd1-bd00-bd898db487a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 04:54:25 np0005533252 nova_compute[230010]: 2025-11-24 09:54:25.639 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:25 np0005533252 podman[234884]: 2025-11-24 09:54:25.771324477 +0000 UTC m=+0.051787937 container create 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:54:25 np0005533252 systemd[1]: Started libpod-conmon-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1.scope.
Nov 24 04:54:25 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:54:25 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4be7663a828fa7a12df69277b0f29f124c5f09c02f4aa150357d874ae378fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 04:54:25 np0005533252 podman[234884]: 2025-11-24 09:54:25.745324761 +0000 UTC m=+0.025788251 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:54:25 np0005533252 podman[234884]: 2025-11-24 09:54:25.842971658 +0000 UTC m=+0.123435138 container init 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:54:25 np0005533252 podman[234884]: 2025-11-24 09:54:25.849024536 +0000 UTC m=+0.129487996 container start 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 04:54:25 np0005533252 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [NOTICE]   (234904) : New worker (234906) forked
Nov 24 04:54:25 np0005533252 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [NOTICE]   (234904) : Loading success.
Nov 24 04:54:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:25.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:26.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:27.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:28.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:29 np0005533252 podman[234942]: 2025-11-24 09:54:29.345352538 +0000 UTC m=+0.072919034 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 04:54:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:29 np0005533252 nova_compute[230010]: 2025-11-24 09:54:29.775 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:54:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:54:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:54:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:30.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:30 np0005533252 NetworkManager[48870]: <info>  [1763978070.6333] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Nov 24 04:54:30 np0005533252 NetworkManager[48870]: <info>  [1763978070.6341] device (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:54:30 np0005533252 nova_compute[230010]: 2025-11-24 09:54:30.632 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:30 np0005533252 NetworkManager[48870]: <info>  [1763978070.6352] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 24 04:54:30 np0005533252 NetworkManager[48870]: <info>  [1763978070.6356] device (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 04:54:30 np0005533252 NetworkManager[48870]: <info>  [1763978070.6364] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 24 04:54:30 np0005533252 NetworkManager[48870]: <info>  [1763978070.6370] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 24 04:54:30 np0005533252 NetworkManager[48870]: <info>  [1763978070.6374] device (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 04:54:30 np0005533252 NetworkManager[48870]: <info>  [1763978070.6377] device (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 04:54:30 np0005533252 nova_compute[230010]: 2025-11-24 09:54:30.702 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:30 np0005533252 nova_compute[230010]: 2025-11-24 09:54:30.703 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:30 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:30Z|00032|binding|INFO|Releasing lport a1fde06e-6df3-4ca6-8746-8510f661dd46 from this chassis (sb_readonly=0)
Nov 24 04:54:30 np0005533252 nova_compute[230010]: 2025-11-24 09:54:30.712 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:33 np0005533252 podman[234966]: 2025-11-24 09:54:33.328433029 +0000 UTC m=+0.071409957 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 04:54:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:54:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:33.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:54:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:34.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:34 np0005533252 nova_compute[230010]: 2025-11-24 09:54:34.778 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:35Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:46:4d 10.100.0.22
Nov 24 04:54:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:35Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:46:4d 10.100.0.22
Nov 24 04:54:35 np0005533252 nova_compute[230010]: 2025-11-24 09:54:35.705 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:35.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:36.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:38.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:39 np0005533252 nova_compute[230010]: 2025-11-24 09:54:39.782 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:39.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:40 np0005533252 nova_compute[230010]: 2025-11-24 09:54:40.710 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:41 np0005533252 podman[234997]: 2025-11-24 09:54:41.352637653 +0000 UTC m=+0.081062933 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.726 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.727 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.727 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.727 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.727 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.729 230014 INFO nova.compute.manager [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Terminating instance#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.729 230014 DEBUG nova.compute.manager [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 24 04:54:41 np0005533252 kernel: tap31962c69-e8 (unregistering): left promiscuous mode
Nov 24 04:54:41 np0005533252 NetworkManager[48870]: <info>  [1763978081.7862] device (tap31962c69-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 04:54:41 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:41Z|00033|binding|INFO|Releasing lport 31962c69-e86c-4431-b40a-e84cb6d9b71d from this chassis (sb_readonly=0)
Nov 24 04:54:41 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:41Z|00034|binding|INFO|Setting lport 31962c69-e86c-4431-b40a-e84cb6d9b71d down in Southbound
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.787 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:41 np0005533252 ovn_controller[132966]: 2025-11-24T09:54:41Z|00035|binding|INFO|Removing iface tap31962c69-e8 ovn-installed in OVS
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.791 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:41 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.798 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:46:4d 10.100.0.22'], port_security=['fa:16:3e:69:46:4d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '4313a8bf-5a2a-4de5-84e7-ead18a049c18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e927f01-795d-4fd1-bd00-bd898db487a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '841654bd-af9d-487b-9d46-e948edd0e4cf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12eb72db-6a1a-4bb9-9912-1e510973ae62, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=31962c69-e86c-4431-b40a-e84cb6d9b71d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:54:41 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.799 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 31962c69-e86c-4431-b40a-e84cb6d9b71d in datapath 8e927f01-795d-4fd1-bd00-bd898db487a3 unbound from our chassis#033[00m
Nov 24 04:54:41 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.800 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e927f01-795d-4fd1-bd00-bd898db487a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 24 04:54:41 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.801 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[030ea456-7b77-4012-90fd-cd7fd8b13010]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:54:41 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:41.802 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 namespace which is not needed anymore#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.820 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:41 np0005533252 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 24 04:54:41 np0005533252 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 14.456s CPU time.
Nov 24 04:54:41 np0005533252 systemd-machined[193537]: Machine qemu-1-instance-00000002 terminated.
Nov 24 04:54:41 np0005533252 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [NOTICE]   (234904) : haproxy version is 2.8.14-c23fe91
Nov 24 04:54:41 np0005533252 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [NOTICE]   (234904) : path to executable is /usr/sbin/haproxy
Nov 24 04:54:41 np0005533252 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [WARNING]  (234904) : Exiting Master process...
Nov 24 04:54:41 np0005533252 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [ALERT]    (234904) : Current worker (234906) exited with code 143 (Terminated)
Nov 24 04:54:41 np0005533252 neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3[234900]: [WARNING]  (234904) : All workers exited. Exiting... (0)
Nov 24 04:54:41 np0005533252 systemd[1]: libpod-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1.scope: Deactivated successfully.
Nov 24 04:54:41 np0005533252 conmon[234900]: conmon 83dd7d5f66fd8162aac1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1.scope/container/memory.events
Nov 24 04:54:41 np0005533252 podman[235043]: 2025-11-24 09:54:41.934045598 +0000 UTC m=+0.041129687 container died 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.951 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.957 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.961 230014 INFO nova.virt.libvirt.driver [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Instance destroyed successfully.#033[00m
Nov 24 04:54:41 np0005533252 nova_compute[230010]: 2025-11-24 09:54:41.963 230014 DEBUG nova.objects.instance [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 4313a8bf-5a2a-4de5-84e7-ead18a049c18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:54:41 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1-userdata-shm.mount: Deactivated successfully.
Nov 24 04:54:41 np0005533252 systemd[1]: var-lib-containers-storage-overlay-2f4be7663a828fa7a12df69277b0f29f124c5f09c02f4aa150357d874ae378fd-merged.mount: Deactivated successfully.
Nov 24 04:54:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:41.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:41 np0005533252 podman[235043]: 2025-11-24 09:54:41.977534332 +0000 UTC m=+0.084618451 container cleanup 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:54:41 np0005533252 systemd[1]: libpod-conmon-83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1.scope: Deactivated successfully.
Nov 24 04:54:42 np0005533252 podman[235082]: 2025-11-24 09:54:42.046459276 +0000 UTC m=+0.038721668 container remove 83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.048 230014 DEBUG nova.compute.manager [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-unplugged-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.048 230014 DEBUG oslo_concurrency.lockutils [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.049 230014 DEBUG oslo_concurrency.lockutils [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.050 230014 DEBUG oslo_concurrency.lockutils [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.050 230014 DEBUG nova.compute.manager [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] No waiting events found dispatching network-vif-unplugged-31962c69-e86c-4431-b40a-e84cb6d9b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.050 230014 DEBUG nova.compute.manager [req-9c5d89d1-2925-4249-a94b-c279b13219f2 req-7359e52f-689f-429d-993e-1bddde47ec61 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-unplugged-31962c69-e86c-4431-b40a-e84cb6d9b71d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.052 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[14cbce50-5db7-4f3d-8438-18300116bcb9]: (4, ('Mon Nov 24 09:54:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 (83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1)\n83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1\nMon Nov 24 09:54:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 (83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1)\n83dd7d5f66fd8162aac1fe7de57d0040058e915a91081a990d0fc68eb8eb06d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.054 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[11c4bef9-3c18-4b69-82a1-80a0d146cf5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.055 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e927f01-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.056 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:42 np0005533252 kernel: tap8e927f01-70: left promiscuous mode
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.074 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.076 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e444b7-cf65-45f9-89e9-164df4fd6e8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.092 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[de19768d-a505-4501-bdd3-feec01c8a68f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.094 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[79220a0e-548b-448e-af54-714f08ad2aa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.099 230014 DEBUG nova.virt.libvirt.vif [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:54:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-904956127',display_name='tempest-TestNetworkBasicOps-server-904956127',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-904956127',id=2,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBk6JkAuduqittiDGA4pBhSCzmrjSnKU2daRXm5XDAaZpUlbHNfHVDmOyWJWR78b4GrvBoMlHYEMPqcBJQA/sKOhpsOzfkRRFgAuDlkN09WkiLcyZB4s6iUYsG2XLZZzXw==',key_name='tempest-TestNetworkBasicOps-831372657',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:54:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-gukftcea',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:54:21Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=4313a8bf-5a2a-4de5-84e7-ead18a049c18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.100 230014 DEBUG nova.network.os_vif_util [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "address": "fa:16:3e:69:46:4d", "network": {"id": "8e927f01-795d-4fd1-bd00-bd898db487a3", "bridge": "br-int", "label": "tempest-network-smoke--503305566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31962c69-e8", "ovs_interfaceid": "31962c69-e86c-4431-b40a-e84cb6d9b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.101 230014 DEBUG nova.network.os_vif_util [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.102 230014 DEBUG os_vif [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.106 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.106 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31962c69-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.108 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.109 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b08c03d6-e07f-4768-9002-2e2d91b684e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400874, 'reachable_time': 43312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235100, 'error': None, 'target': 'ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.111 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.114 230014 INFO os_vif [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:46:4d,bridge_name='br-int',has_traffic_filtering=True,id=31962c69-e86c-4431-b40a-e84cb6d9b71d,network=Network(8e927f01-795d-4fd1-bd00-bd898db487a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31962c69-e8')#033[00m
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.120 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e927f01-795d-4fd1-bd00-bd898db487a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 24 04:54:42 np0005533252 systemd[1]: run-netns-ovnmeta\x2d8e927f01\x2d795d\x2d4fd1\x2dbd00\x2dbd898db487a3.mount: Deactivated successfully.
Nov 24 04:54:42 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:54:42.121 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[df88bb95-887a-48ea-a440-9b60698c406e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.518 230014 INFO nova.virt.libvirt.driver [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Deleting instance files /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18_del#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.519 230014 INFO nova.virt.libvirt.driver [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Deletion of /var/lib/nova/instances/4313a8bf-5a2a-4de5-84e7-ead18a049c18_del complete#033[00m
Nov 24 04:54:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:42.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.606 230014 DEBUG nova.virt.libvirt.host [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.606 230014 INFO nova.virt.libvirt.host [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] UEFI support detected#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.608 230014 INFO nova.compute.manager [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.608 230014 DEBUG oslo.service.loopingcall [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.608 230014 DEBUG nova.compute.manager [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 24 04:54:42 np0005533252 nova_compute[230010]: 2025-11-24 09:54:42.608 230014 DEBUG nova.network.neutron [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.240 230014 DEBUG nova.network.neutron [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.251 230014 INFO nova.compute.manager [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Took 0.64 seconds to deallocate network for instance.#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.292 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.292 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.335 230014 DEBUG oslo_concurrency.processutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:54:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:54:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1476535774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.789 230014 DEBUG oslo_concurrency.processutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.795 230014 DEBUG nova.compute.provider_tree [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.807 230014 DEBUG nova.scheduler.client.report [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.824 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.848 230014 INFO nova.scheduler.client.report [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 4313a8bf-5a2a-4de5-84e7-ead18a049c18#033[00m
Nov 24 04:54:43 np0005533252 nova_compute[230010]: 2025-11-24 09:54:43.903 230014 DEBUG oslo_concurrency.lockutils [None req-4b928f4d-3ada-4121-85c5-3feee966f18f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:43.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:44 np0005533252 nova_compute[230010]: 2025-11-24 09:54:44.126 230014 DEBUG nova.compute.manager [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:54:44 np0005533252 nova_compute[230010]: 2025-11-24 09:54:44.127 230014 DEBUG oslo_concurrency.lockutils [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:54:44 np0005533252 nova_compute[230010]: 2025-11-24 09:54:44.127 230014 DEBUG oslo_concurrency.lockutils [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:54:44 np0005533252 nova_compute[230010]: 2025-11-24 09:54:44.127 230014 DEBUG oslo_concurrency.lockutils [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "4313a8bf-5a2a-4de5-84e7-ead18a049c18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:54:44 np0005533252 nova_compute[230010]: 2025-11-24 09:54:44.127 230014 DEBUG nova.compute.manager [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] No waiting events found dispatching network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:54:44 np0005533252 nova_compute[230010]: 2025-11-24 09:54:44.128 230014 WARNING nova.compute.manager [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received unexpected event network-vif-plugged-31962c69-e86c-4431-b40a-e84cb6d9b71d for instance with vm_state deleted and task_state None.#033[00m
Nov 24 04:54:44 np0005533252 nova_compute[230010]: 2025-11-24 09:54:44.128 230014 DEBUG nova.compute.manager [req-06a12199-37fc-4dfc-99dd-0a4ebfca284f req-3c576571-d042-4490-8eed-c564b81d28d3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Received event network-vif-deleted-31962c69-e86c-4431-b40a-e84cb6d9b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:54:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:44.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:54:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:54:45 np0005533252 nova_compute[230010]: 2025-11-24 09:54:45.712 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:45.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:46.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:46 np0005533252 nova_compute[230010]: 2025-11-24 09:54:46.606 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:46 np0005533252 nova_compute[230010]: 2025-11-24 09:54:46.704 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:47 np0005533252 nova_compute[230010]: 2025-11-24 09:54:47.109 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:47.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:54:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:48.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:54:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:49.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:50 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:54:50 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362967868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:54:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:50.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:50 np0005533252 nova_compute[230010]: 2025-11-24 09:54:50.713 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:52 np0005533252 nova_compute[230010]: 2025-11-24 09:54:52.110 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:52.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:53.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:54.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:55 np0005533252 nova_compute[230010]: 2025-11-24 09:54:55.716 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:55.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:56.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:56 np0005533252 nova_compute[230010]: 2025-11-24 09:54:56.960 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978081.958483, 4313a8bf-5a2a-4de5-84e7-ead18a049c18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:54:56 np0005533252 nova_compute[230010]: 2025-11-24 09:54:56.960 230014 INFO nova.compute.manager [-] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] VM Stopped (Lifecycle Event)#033[00m
Nov 24 04:54:56 np0005533252 nova_compute[230010]: 2025-11-24 09:54:56.977 230014 DEBUG nova.compute.manager [None req-de1e3bcb-8ad7-4049-aa46-0de17f47301f - - - - - -] [instance: 4313a8bf-5a2a-4de5-84e7-ead18a049c18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:54:57 np0005533252 nova_compute[230010]: 2025-11-24 09:54:57.112 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:54:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:57.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:54:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:54:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:58.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:54:58 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:54:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:54:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:54:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:54:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:59.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:00 np0005533252 podman[235260]: 2025-11-24 09:55:00.320163686 +0000 UTC m=+0.058013068 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 24 04:55:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:55:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:55:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:00 np0005533252 nova_compute[230010]: 2025-11-24 09:55:00.717 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:01.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:02 np0005533252 nova_compute[230010]: 2025-11-24 09:55:02.113 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:55:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:55:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:02.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:03 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:55:03 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:55:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:04 np0005533252 podman[235307]: 2025-11-24 09:55:04.337209679 +0000 UTC m=+0.076178213 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 04:55:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:04.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:05 np0005533252 nova_compute[230010]: 2025-11-24 09:55:05.720 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:07 np0005533252 nova_compute[230010]: 2025-11-24 09:55:07.115 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:07 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:07.724 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:55:07 np0005533252 nova_compute[230010]: 2025-11-24 09:55:07.725 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:07 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:07.726 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 04:55:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:07.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:08.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:55:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:10.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:55:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:10.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:10 np0005533252 nova_compute[230010]: 2025-11-24 09:55:10.722 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000048s ======
Nov 24 04:55:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:12.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 24 04:55:12 np0005533252 podman[235363]: 2025-11-24 09:55:12.088230333 +0000 UTC m=+0.051536060 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 04:55:12 np0005533252 nova_compute[230010]: 2025-11-24 09:55:12.117 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:12.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:12.728 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.791 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:55:13 np0005533252 nova_compute[230010]: 2025-11-24 09:55:13.792 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:55:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:14.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:55:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/69888656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.229 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.373 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.375 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5047MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.375 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.375 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:55:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.442 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.442 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.460 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:55:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:14.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:55:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1018635266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.876 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.882 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.897 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.936 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 04:55:14 np0005533252 nova_compute[230010]: 2025-11-24 09:55:14.936 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:55:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:55:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:55:15 np0005533252 nova_compute[230010]: 2025-11-24 09:55:15.722 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:15 np0005533252 nova_compute[230010]: 2025-11-24 09:55:15.937 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:55:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:16.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:16.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:16 np0005533252 nova_compute[230010]: 2025-11-24 09:55:16.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:55:16 np0005533252 nova_compute[230010]: 2025-11-24 09:55:16.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 04:55:16 np0005533252 nova_compute[230010]: 2025-11-24 09:55:16.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 04:55:16 np0005533252 nova_compute[230010]: 2025-11-24 09:55:16.780 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 04:55:16 np0005533252 nova_compute[230010]: 2025-11-24 09:55:16.781 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:55:17 np0005533252 nova_compute[230010]: 2025-11-24 09:55:17.119 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:17 np0005533252 nova_compute[230010]: 2025-11-24 09:55:17.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:55:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:18.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:18 np0005533252 nova_compute[230010]: 2025-11-24 09:55:18.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:55:18 np0005533252 nova_compute[230010]: 2025-11-24 09:55:18.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 04:55:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:20.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:20.056 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:55:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:20.056 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:55:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:20.056 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:55:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:20.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:20 np0005533252 nova_compute[230010]: 2025-11-24 09:55:20.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.207 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.208 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.225 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.292 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.292 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.299 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.299 230014 INFO nova.compute.claims [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Claim successful on node compute-1.ctlplane.example.com
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.378 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.798 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.806 230014 DEBUG nova.compute.provider_tree [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.826 230014 DEBUG nova.scheduler.client.report [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.845 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.847 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.892 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.893 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.911 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 04:55:21 np0005533252 nova_compute[230010]: 2025-11-24 09:55:21.926 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.014 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.016 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.018 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Creating image(s)
Nov 24 04:55:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.004000095s ======
Nov 24 04:55:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:22.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000095s
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.044 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.067 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.087 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.090 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.121 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.144 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.145 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.146 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.146 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.172 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.176 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 8e009e75-a97b-4c5d-a470-5db1137cb407_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.276 230014 DEBUG nova.policy [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.416 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 8e009e75-a97b-4c5d-a470-5db1137cb407_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.487 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.572 230014 DEBUG nova.objects.instance [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.584 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.584 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Ensure instance console log exists: /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.584 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.585 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.585 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:55:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:22.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:22 np0005533252 nova_compute[230010]: 2025-11-24 09:55:22.885 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Successfully created port: e962e27f-80bf-4103-98ae-d8af84c6fc28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 04:55:23 np0005533252 nova_compute[230010]: 2025-11-24 09:55:23.735 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Successfully updated port: e962e27f-80bf-4103-98ae-d8af84c6fc28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 04:55:23 np0005533252 nova_compute[230010]: 2025-11-24 09:55:23.751 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 04:55:23 np0005533252 nova_compute[230010]: 2025-11-24 09:55:23.751 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 04:55:23 np0005533252 nova_compute[230010]: 2025-11-24 09:55:23.751 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 04:55:23 np0005533252 nova_compute[230010]: 2025-11-24 09:55:23.825 230014 DEBUG nova.compute.manager [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 04:55:23 np0005533252 nova_compute[230010]: 2025-11-24 09:55:23.825 230014 DEBUG nova.compute.manager [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing instance network info cache due to event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 04:55:23 np0005533252 nova_compute[230010]: 2025-11-24 09:55:23.826 230014 DEBUG oslo_concurrency.lockutils [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 04:55:23 np0005533252 nova_compute[230010]: 2025-11-24 09:55:23.882 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 04:55:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:24.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.603 230014 DEBUG nova.network.neutron [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:55:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:24.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.621 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.622 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance network_info: |[{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.622 230014 DEBUG oslo_concurrency.lockutils [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.622 230014 DEBUG nova.network.neutron [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.625 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start _get_guest_xml network_info=[{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.629 230014 WARNING nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.634 230014 DEBUG nova.virt.libvirt.host [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.635 230014 DEBUG nova.virt.libvirt.host [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.643 230014 DEBUG nova.virt.libvirt.host [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.644 230014 DEBUG nova.virt.libvirt.host [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.644 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.645 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.645 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.646 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.646 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.646 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.647 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.647 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.647 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.648 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.648 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.648 230014 DEBUG nova.virt.hardware [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 24 04:55:24 np0005533252 nova_compute[230010]: 2025-11-24 09:55:24.652 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:55:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 04:55:25 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3838070251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.174 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.204 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.208 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:55:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 04:55:25 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/861026441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.659 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.661 230014 DEBUG nova.virt.libvirt.vif [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:55:21Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.661 230014 DEBUG nova.network.os_vif_util [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.662 230014 DEBUG nova.network.os_vif_util [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.663 230014 DEBUG nova.objects.instance [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.679 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] End _get_guest_xml xml=<domain type="kvm">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <name>instance-00000003</name>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <memory>131072</memory>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <vcpu>1</vcpu>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <nova:creationTime>2025-11-24 09:55:24</nova:creationTime>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <nova:flavor name="m1.nano">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <nova:memory>128</nova:memory>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <nova:disk>1</nova:disk>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <nova:swap>0</nova:swap>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <nova:vcpus>1</nova:vcpus>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      </nova:flavor>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <nova:owner>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      </nova:owner>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <nova:ports>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        </nova:port>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      </nova:ports>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </nova:instance>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <sysinfo type="smbios">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <entry name="manufacturer">RDO</entry>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <entry name="product">OpenStack Compute</entry>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <entry name="serial">8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <entry name="uuid">8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <entry name="family">Virtual Machine</entry>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <boot dev="hd"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <smbios mode="sysinfo"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <vmcoreinfo/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <clock offset="utc">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <timer name="pit" tickpolicy="delay"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <timer name="hpet" present="no"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <cpu mode="host-model" match="exact">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <topology sockets="1" cores="1" threads="1"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <disk type="network" device="disk">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <target dev="vda" bus="virtio"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <disk type="network" device="cdrom">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <target dev="sda" bus="sata"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <interface type="ethernet">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <mac address="fa:16:3e:a4:f1:71"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <mtu size="1442"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <target dev="tape962e27f-80"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <serial type="pty">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <log file="/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log" append="off"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <input type="tablet" bus="usb"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <rng model="virtio">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <backend model="random">/dev/urandom</backend>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <controller type="usb" index="0"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    <memballoon model="virtio">
Nov 24 04:55:25 np0005533252 nova_compute[230010]:      <stats period="10"/>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:55:25 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:55:25 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:55:25 np0005533252 nova_compute[230010]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.680 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Preparing to wait for external event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.680 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.681 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.681 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.681 230014 DEBUG nova.virt.libvirt.vif [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:55:21Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.682 230014 DEBUG nova.network.os_vif_util [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.682 230014 DEBUG nova.network.os_vif_util [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.683 230014 DEBUG os_vif [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.683 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.683 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.684 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.687 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.687 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape962e27f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.688 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape962e27f-80, col_values=(('external_ids', {'iface-id': 'e962e27f-80bf-4103-98ae-d8af84c6fc28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:f1:71', 'vm-uuid': '8e009e75-a97b-4c5d-a470-5db1137cb407'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 04:55:25 np0005533252 NetworkManager[48870]: <info>  [1763978125.7128] manager: (tape962e27f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.712 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.716 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.721 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.722 230014 INFO os_vif [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80')
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.760 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.761 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.761 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:a4:f1:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.761 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Using config drive
Nov 24 04:55:25 np0005533252 nova_compute[230010]: 2025-11-24 09:55:25.788 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:55:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:26.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.081 230014 DEBUG nova.network.neutron [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updated VIF entry in instance network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.082 230014 DEBUG nova.network.neutron [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.098 230014 DEBUG oslo_concurrency.lockutils [req-d343bef7-8df3-4da0-bffb-87574abdc6e6 req-7f09d447-5261-48b3-99dd-53431fd84f40 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.262 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Creating config drive at /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.267 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c178wmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.395 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c178wmp" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.429 230014 DEBUG nova.storage.rbd_utils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.433 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.597 230014 DEBUG oslo_concurrency.processutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config 8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.598 230014 INFO nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Deleting local config drive /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/disk.config because it was imported into RBD.
Nov 24 04:55:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:26.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:26 np0005533252 kernel: tape962e27f-80: entered promiscuous mode
Nov 24 04:55:26 np0005533252 NetworkManager[48870]: <info>  [1763978126.6636] manager: (tape962e27f-80): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.666 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:26 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:26Z|00036|binding|INFO|Claiming lport e962e27f-80bf-4103-98ae-d8af84c6fc28 for this chassis.
Nov 24 04:55:26 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:26Z|00037|binding|INFO|e962e27f-80bf-4103-98ae-d8af84c6fc28: Claiming fa:16:3e:a4:f1:71 10.100.0.6
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.672 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.674 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.684 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:f1:71 10.100.0.6'], port_security=['fa:16:3e:a4:f1:71 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8e009e75-a97b-4c5d-a470-5db1137cb407', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-636fec29-e18e-45f1-aabc-369f5fd0d593', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '08677d44-dac1-4cc6-ac2a-f951a8415b1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cab95497-29d8-4481-acd1-a71d08bb0310, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=e962e27f-80bf-4103-98ae-d8af84c6fc28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.685 142336 INFO neutron.agent.ovn.metadata.agent [-] Port e962e27f-80bf-4103-98ae-d8af84c6fc28 in datapath 636fec29-e18e-45f1-aabc-369f5fd0d593 bound to our chassis
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.686 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 636fec29-e18e-45f1-aabc-369f5fd0d593
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.695 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[57f931bd-64f4-425e-a6bf-4c020a995d12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.696 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap636fec29-e1 in ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.697 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap636fec29-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.697 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[0f11a6da-0ed3-41a2-aaab-2f270d30c40a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.699 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3e322adc-7e8a-447a-9cf7-651ace0d462a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:55:26 np0005533252 systemd-udevd[235782]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:55:26 np0005533252 systemd-machined[193537]: New machine qemu-2-instance-00000003.
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.712 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[fb273dea-7856-4ca7-9692-a649ba066d84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:55:26 np0005533252 NetworkManager[48870]: <info>  [1763978126.7197] device (tape962e27f-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 04:55:26 np0005533252 NetworkManager[48870]: <info>  [1763978126.7218] device (tape962e27f-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 04:55:26 np0005533252 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.740 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[df66db83-f7e4-4d13-ac5e-bb3a248b8b7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.749 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:55:26 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:26Z|00038|binding|INFO|Setting lport e962e27f-80bf-4103-98ae-d8af84c6fc28 ovn-installed in OVS
Nov 24 04:55:26 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:26Z|00039|binding|INFO|Setting lport e962e27f-80bf-4103-98ae-d8af84c6fc28 up in Southbound
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.755 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.772 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[fa61ddb0-cdfd-48f7-9ced-6b575cae3205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.778 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f62133d9-09cf-4b80-b950-5d6b009d4d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:26 np0005533252 NetworkManager[48870]: <info>  [1763978126.7804] manager: (tap636fec29-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.815 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[40e8d140-65d8-4afe-94ce-9b20e831f44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.820 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ecc4f7-3a11-4c3c-9336-41995b7353a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:26 np0005533252 NetworkManager[48870]: <info>  [1763978126.8461] device (tap636fec29-e0): carrier: link connected
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.852 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[690bad71-3be6-4ffa-8d0e-6e6990ab896c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.875 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ec0a3b-0b74-4623-8be4-0fb838e6a84d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap636fec29-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:23:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407039, 'reachable_time': 23832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235814, 'error': None, 'target': 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.893 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[a20af0ad-4e22-4afb-9fb6-077693dc244c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3d:23e6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407039, 'tstamp': 407039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235815, 'error': None, 'target': 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.914 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[4695b96c-d2ea-4664-a63d-7ebd0f854f3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap636fec29-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:23:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407039, 'reachable_time': 23832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235816, 'error': None, 'target': 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.937 230014 DEBUG nova.compute.manager [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.938 230014 DEBUG oslo_concurrency.lockutils [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.938 230014 DEBUG oslo_concurrency.lockutils [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.939 230014 DEBUG oslo_concurrency.lockutils [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:55:26 np0005533252 nova_compute[230010]: 2025-11-24 09:55:26.939 230014 DEBUG nova.compute.manager [req-90a6330a-ef3a-45e6-9a9a-00673397ee49 req-0d86d481-d2c9-4543-a643-e981647f39c6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Processing event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 24 04:55:26 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:26.953 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcfb7e8-fa9b-4069-98ae-0cbf9d16a6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.020 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f4207494-6ea8-4f69-bdf8-561abbf87a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.022 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap636fec29-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.022 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.023 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap636fec29-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.025 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:27 np0005533252 NetworkManager[48870]: <info>  [1763978127.0269] manager: (tap636fec29-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 24 04:55:27 np0005533252 kernel: tap636fec29-e0: entered promiscuous mode
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.027 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.032 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap636fec29-e0, col_values=(('external_ids', {'iface-id': '9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.045 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.047 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:27 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:27Z|00040|binding|INFO|Releasing lport 9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b from this chassis (sb_readonly=0)
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.049 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/636fec29-e18e-45f1-aabc-369f5fd0d593.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/636fec29-e18e-45f1-aabc-369f5fd0d593.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.052 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1feda7d0-8c95-475f-a7ae-e6f3f0c42e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.053 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: global
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    log         /dev/log local0 debug
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    log-tag     haproxy-metadata-proxy-636fec29-e18e-45f1-aabc-369f5fd0d593
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    user        root
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    group       root
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    maxconn     1024
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    pidfile     /var/lib/neutron/external/pids/636fec29-e18e-45f1-aabc-369f5fd0d593.pid.haproxy
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    daemon
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: defaults
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    log global
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    mode http
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    option httplog
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    option dontlognull
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    option http-server-close
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    option forwardfor
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    retries                 3
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    timeout http-request    30s
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    timeout connect         30s
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    timeout client          32s
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    timeout server          32s
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    timeout http-keep-alive 30s
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: listen listener
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    bind 169.254.169.254:80
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    server metadata /var/lib/neutron/metadata_proxy
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]:    http-request add-header X-OVN-Network-ID 636fec29-e18e-45f1-aabc-369f5fd0d593
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 24 04:55:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:55:27.054 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'env', 'PROCESS_TAG=haproxy-636fec29-e18e-45f1-aabc-369f5fd0d593', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/636fec29-e18e-45f1-aabc-369f5fd0d593.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.059 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:27 np0005533252 podman[235855]: 2025-11-24 09:55:27.495466713 +0000 UTC m=+0.054483114 container create 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 04:55:27 np0005533252 systemd[1]: Started libpod-conmon-2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf.scope.
Nov 24 04:55:27 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:55:27 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d411d6d830a7518401f278920271906d8077c3d66c4ff88eb3a65c883bda73c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 04:55:27 np0005533252 podman[235855]: 2025-11-24 09:55:27.468123435 +0000 UTC m=+0.027139856 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:55:27 np0005533252 podman[235855]: 2025-11-24 09:55:27.566686034 +0000 UTC m=+0.125702445 container init 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 04:55:27 np0005533252 podman[235855]: 2025-11-24 09:55:27.575382526 +0000 UTC m=+0.134398927 container start 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:55:27 np0005533252 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [NOTICE]   (235909) : New worker (235912) forked
Nov 24 04:55:27 np0005533252 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [NOTICE]   (235909) : Loading success.
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.621 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.623 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978127.620338, 8e009e75-a97b-4c5d-a470-5db1137cb407 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.623 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] VM Started (Lifecycle Event)#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.627 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.632 230014 INFO nova.virt.libvirt.driver [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance spawned successfully.#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.632 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.641 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.648 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.652 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.653 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.654 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.654 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.654 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.655 230014 DEBUG nova.virt.libvirt.driver [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.678 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.679 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978127.6217556, 8e009e75-a97b-4c5d-a470-5db1137cb407 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.679 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] VM Paused (Lifecycle Event)#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.701 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.705 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978127.6270826, 8e009e75-a97b-4c5d-a470-5db1137cb407 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.705 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] VM Resumed (Lifecycle Event)#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.722 230014 INFO nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Took 5.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.723 230014 DEBUG nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.724 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.731 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.759 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.782 230014 INFO nova.compute.manager [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Took 6.51 seconds to build instance.#033[00m
Nov 24 04:55:27 np0005533252 nova_compute[230010]: 2025-11-24 09:55:27.794 230014 DEBUG oslo_concurrency.lockutils [None req-86e3bf7b-0e0f-4d34-a098-2c7fd3877eac 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:55:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:28.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:28.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:29 np0005533252 nova_compute[230010]: 2025-11-24 09:55:29.016 230014 DEBUG nova.compute.manager [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:55:29 np0005533252 nova_compute[230010]: 2025-11-24 09:55:29.017 230014 DEBUG oslo_concurrency.lockutils [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:55:29 np0005533252 nova_compute[230010]: 2025-11-24 09:55:29.017 230014 DEBUG oslo_concurrency.lockutils [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:55:29 np0005533252 nova_compute[230010]: 2025-11-24 09:55:29.018 230014 DEBUG oslo_concurrency.lockutils [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:55:29 np0005533252 nova_compute[230010]: 2025-11-24 09:55:29.018 230014 DEBUG nova.compute.manager [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:55:29 np0005533252 nova_compute[230010]: 2025-11-24 09:55:29.018 230014 WARNING nova.compute.manager [req-bd7a9114-0c1b-4057-80fa-07b20afad1c9 req-1a6981c0-cd3a-406a-ba12-6393ed89d3ab 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 for instance with vm_state active and task_state None.#033[00m
Nov 24 04:55:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:30.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:55:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:55:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:30.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:30 np0005533252 nova_compute[230010]: 2025-11-24 09:55:30.715 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:30 np0005533252 nova_compute[230010]: 2025-11-24 09:55:30.727 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:31 np0005533252 podman[235924]: 2025-11-24 09:55:31.329096741 +0000 UTC m=+0.066945028 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:55:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:32.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:32 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:32Z|00041|binding|INFO|Releasing lport 9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b from this chassis (sb_readonly=0)
Nov 24 04:55:32 np0005533252 NetworkManager[48870]: <info>  [1763978132.4696] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 24 04:55:32 np0005533252 NetworkManager[48870]: <info>  [1763978132.4708] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 24 04:55:32 np0005533252 nova_compute[230010]: 2025-11-24 09:55:32.468 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:32 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:32Z|00042|binding|INFO|Releasing lport 9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b from this chassis (sb_readonly=0)
Nov 24 04:55:32 np0005533252 nova_compute[230010]: 2025-11-24 09:55:32.508 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:32 np0005533252 nova_compute[230010]: 2025-11-24 09:55:32.514 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:32 np0005533252 nova_compute[230010]: 2025-11-24 09:55:32.690 230014 DEBUG nova.compute.manager [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:55:32 np0005533252 nova_compute[230010]: 2025-11-24 09:55:32.691 230014 DEBUG nova.compute.manager [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing instance network info cache due to event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:55:32 np0005533252 nova_compute[230010]: 2025-11-24 09:55:32.691 230014 DEBUG oslo_concurrency.lockutils [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:55:32 np0005533252 nova_compute[230010]: 2025-11-24 09:55:32.691 230014 DEBUG oslo_concurrency.lockutils [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:55:32 np0005533252 nova_compute[230010]: 2025-11-24 09:55:32.691 230014 DEBUG nova.network.neutron [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:55:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:34.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:34 np0005533252 nova_compute[230010]: 2025-11-24 09:55:34.438 230014 DEBUG nova.network.neutron [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updated VIF entry in instance network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:55:34 np0005533252 nova_compute[230010]: 2025-11-24 09:55:34.439 230014 DEBUG nova.network.neutron [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:55:34 np0005533252 nova_compute[230010]: 2025-11-24 09:55:34.462 230014 DEBUG oslo_concurrency.lockutils [req-028bf575-eecc-4078-ae42-00d76a1f067e req-edf32178-6b99-4ce9-a552-7820a65dfb00 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:55:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:35 np0005533252 podman[235948]: 2025-11-24 09:55:35.337958163 +0000 UTC m=+0.077483885 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:55:35 np0005533252 nova_compute[230010]: 2025-11-24 09:55:35.757 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:36.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:36.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:38.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:38.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:40.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:40.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:40 np0005533252 nova_compute[230010]: 2025-11-24 09:55:40.758 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:40 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:40Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:f1:71 10.100.0.6
Nov 24 04:55:40 np0005533252 ovn_controller[132966]: 2025-11-24T09:55:40Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:f1:71 10.100.0.6
Nov 24 04:55:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:55:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:42.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:55:42 np0005533252 podman[235978]: 2025-11-24 09:55:42.335465424 +0000 UTC m=+0.070319760 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 04:55:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:42.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:44.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:44.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:55:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:55:45 np0005533252 nova_compute[230010]: 2025-11-24 09:55:45.761 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:55:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:46.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:55:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:46.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:55:47 np0005533252 nova_compute[230010]: 2025-11-24 09:55:47.124 230014 INFO nova.compute.manager [None req-5d1b84b3-86be-48aa-8fdf-0c6f72c489ed 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Get console output#033[00m
Nov 24 04:55:47 np0005533252 nova_compute[230010]: 2025-11-24 09:55:47.130 230014 INFO oslo.privsep.daemon [None req-5d1b84b3-86be-48aa-8fdf-0c6f72c489ed 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp9rh8lyep/privsep.sock']#033[00m
Nov 24 04:55:47 np0005533252 nova_compute[230010]: 2025-11-24 09:55:47.853 230014 INFO oslo.privsep.daemon [None req-5d1b84b3-86be-48aa-8fdf-0c6f72c489ed 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 24 04:55:47 np0005533252 nova_compute[230010]: 2025-11-24 09:55:47.699 236028 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 24 04:55:47 np0005533252 nova_compute[230010]: 2025-11-24 09:55:47.703 236028 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 24 04:55:47 np0005533252 nova_compute[230010]: 2025-11-24 09:55:47.705 236028 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 24 04:55:47 np0005533252 nova_compute[230010]: 2025-11-24 09:55:47.706 236028 INFO oslo.privsep.daemon [-] privsep daemon running as pid 236028#033[00m
Nov 24 04:55:47 np0005533252 nova_compute[230010]: 2025-11-24 09:55:47.940 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 24 04:55:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:48.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:48.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:50.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:50.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:50 np0005533252 nova_compute[230010]: 2025-11-24 09:55:50.764 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:55:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:52.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:52.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:55:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:54.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:55:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:55:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:54.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:55 np0005533252 nova_compute[230010]: 2025-11-24 09:55:55.766 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:55:55 np0005533252 nova_compute[230010]: 2025-11-24 09:55:55.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:55 np0005533252 nova_compute[230010]: 2025-11-24 09:55:55.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 04:55:55 np0005533252 nova_compute[230010]: 2025-11-24 09:55:55.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 04:55:55 np0005533252 nova_compute[230010]: 2025-11-24 09:55:55.769 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 04:55:55 np0005533252 nova_compute[230010]: 2025-11-24 09:55:55.771 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:55:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:56.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:56.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:56 np0005533252 nova_compute[230010]: 2025-11-24 09:55:56.861 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:55:56 np0005533252 nova_compute[230010]: 2025-11-24 09:55:56.862 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:55:56 np0005533252 nova_compute[230010]: 2025-11-24 09:55:56.862 230014 DEBUG nova.objects.instance [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'flavor' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:55:57 np0005533252 nova_compute[230010]: 2025-11-24 09:55:57.434 230014 DEBUG nova.objects.instance [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_requests' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:55:57 np0005533252 nova_compute[230010]: 2025-11-24 09:55:57.449 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 24 04:55:57 np0005533252 nova_compute[230010]: 2025-11-24 09:55:57.595 230014 DEBUG nova.policy [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 24 04:55:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:58.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:58 np0005533252 nova_compute[230010]: 2025-11-24 09:55:58.338 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Successfully created port: faa80fbe-f017-47cd-96c8-ca0747a39410 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 24 04:55:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:55:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:55:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:58.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:55:58 np0005533252 nova_compute[230010]: 2025-11-24 09:55:58.993 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Successfully updated port: faa80fbe-f017-47cd-96c8-ca0747a39410 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 24 04:55:59 np0005533252 nova_compute[230010]: 2025-11-24 09:55:59.005 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:55:59 np0005533252 nova_compute[230010]: 2025-11-24 09:55:59.005 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:55:59 np0005533252 nova_compute[230010]: 2025-11-24 09:55:59.005 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 24 04:55:59 np0005533252 nova_compute[230010]: 2025-11-24 09:55:59.092 230014 DEBUG nova.compute.manager [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-changed-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:55:59 np0005533252 nova_compute[230010]: 2025-11-24 09:55:59.093 230014 DEBUG nova.compute.manager [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing instance network info cache due to event network-changed-faa80fbe-f017-47cd-96c8-ca0747a39410. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:55:59 np0005533252 nova_compute[230010]: 2025-11-24 09:55:59.093 230014 DEBUG oslo_concurrency.lockutils [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:55:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:00.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:56:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.627 230014 DEBUG nova.network.neutron [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.643 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.644 230014 DEBUG oslo_concurrency.lockutils [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.644 230014 DEBUG nova.network.neutron [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing network info cache for port faa80fbe-f017-47cd-96c8-ca0747a39410 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.647 230014 DEBUG nova.virt.libvirt.vif [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.648 230014 DEBUG nova.network.os_vif_util [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.648 230014 DEBUG nova.network.os_vif_util [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.649 230014 DEBUG os_vif [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.649 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.650 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.650 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:56:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:00.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.658 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.658 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaa80fbe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.658 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaa80fbe-f0, col_values=(('external_ids', {'iface-id': 'faa80fbe-f017-47cd-96c8-ca0747a39410', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:de:6c', 'vm-uuid': '8e009e75-a97b-4c5d-a470-5db1137cb407'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.660 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 NetworkManager[48870]: <info>  [1763978160.6615] manager: (tapfaa80fbe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.663 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.666 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.667 230014 INFO os_vif [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0')#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.668 230014 DEBUG nova.virt.libvirt.vif [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.668 230014 DEBUG nova.network.os_vif_util [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.669 230014 DEBUG nova.network.os_vif_util [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.672 230014 DEBUG nova.virt.libvirt.guest [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] attach device xml: <interface type="ethernet">
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <mac address="fa:16:3e:23:de:6c"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <model type="virtio"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <mtu size="1442"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <target dev="tapfaa80fbe-f0"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]: </interface>
Nov 24 04:56:00 np0005533252 nova_compute[230010]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 24 04:56:00 np0005533252 kernel: tapfaa80fbe-f0: entered promiscuous mode
Nov 24 04:56:00 np0005533252 NetworkManager[48870]: <info>  [1763978160.6842] manager: (tapfaa80fbe-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 24 04:56:00 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:00Z|00043|binding|INFO|Claiming lport faa80fbe-f017-47cd-96c8-ca0747a39410 for this chassis.
Nov 24 04:56:00 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:00Z|00044|binding|INFO|faa80fbe-f017-47cd-96c8-ca0747a39410: Claiming fa:16:3e:23:de:6c 10.100.0.29
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.687 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.694 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:de:6c 10.100.0.29'], port_security=['fa:16:3e:23:de:6c 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '8e009e75-a97b-4c5d-a470-5db1137cb407', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3401f0a-5661-425e-b817-1a9ea0eafa9c, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=faa80fbe-f017-47cd-96c8-ca0747a39410) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.696 142336 INFO neutron.agent.ovn.metadata.agent [-] Port faa80fbe-f017-47cd-96c8-ca0747a39410 in datapath 2dfea9d1-73f4-435f-ade1-dce53efe0c39 bound to our chassis#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.697 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2dfea9d1-73f4-435f-ade1-dce53efe0c39#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.710 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2705d1-68d0-4f02-87ae-6fc157d7f5ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.710 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2dfea9d1-71 in ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.711 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2dfea9d1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.712 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ce431c8f-ffdc-4e5f-a2b7-937d0f8d3ef2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.712 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[df5f86fa-9775-4460-afc5-6a2617a7fe02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 systemd-udevd[236045]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.725 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[d7cf6f4a-246e-415b-8949-a6484fd388da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:00Z|00045|binding|INFO|Setting lport faa80fbe-f017-47cd-96c8-ca0747a39410 ovn-installed in OVS
Nov 24 04:56:00 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:00Z|00046|binding|INFO|Setting lport faa80fbe-f017-47cd-96c8-ca0747a39410 up in Southbound
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.732 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 NetworkManager[48870]: <info>  [1763978160.7361] device (tapfaa80fbe-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 04:56:00 np0005533252 NetworkManager[48870]: <info>  [1763978160.7380] device (tapfaa80fbe-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.750 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[65b2c5d0-08f2-4be9-bd14-5ae0ce6aa90f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.770 230014 DEBUG nova.virt.libvirt.driver [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.770 230014 DEBUG nova.virt.libvirt.driver [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.770 230014 DEBUG nova.virt.libvirt.driver [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:a4:f1:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.770 230014 DEBUG nova.virt.libvirt.driver [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:23:de:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.772 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.776 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[bc02ad44-d6d0-4084-a4df-ac9b8dad3735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 systemd-udevd[236048]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:56:00 np0005533252 NetworkManager[48870]: <info>  [1763978160.7820] manager: (tap2dfea9d1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.782 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[935387a5-024b-44ba-b3a8-c39d0c98cea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.803 230014 DEBUG nova.virt.libvirt.guest [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:56:00</nova:creationTime>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 04:56:00 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    <nova:port uuid="faa80fbe-f017-47cd-96c8-ca0747a39410">
Nov 24 04:56:00 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:00 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:56:00 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:56:00 np0005533252 nova_compute[230010]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.809 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcf3d51-7575-4b72-badf-524d0526dbe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.811 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[5813e6e7-97ee-4f33-a80b-782cd6c1c5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 NetworkManager[48870]: <info>  [1763978160.8352] device (tap2dfea9d1-70): carrier: link connected
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.835 230014 DEBUG oslo_concurrency.lockutils [None req-edf79fa3-8b3d-40ef-872f-238a87ce07d9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 3.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.839 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[d173bdad-96e9-4cd5-8c4a-b8a98f91a319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.854 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[be311bdc-8d86-4184-a1d7-6e953874c332]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dfea9d1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:b1:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410438, 'reachable_time': 32674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236071, 'error': None, 'target': 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.868 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f0742820-0528-4261-8a9e-feb2bd09c64f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:b157'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410438, 'tstamp': 410438}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236072, 'error': None, 'target': 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.887 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[37ee95bc-b813-4864-80cb-408a1394c8cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dfea9d1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:b1:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410438, 'reachable_time': 32674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236073, 'error': None, 'target': 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.918 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[13450c87-8524-4553-9185-de5094ada784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.971 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[94cf838d-0408-4291-b03a-46eb45ac5bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.972 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dfea9d1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.972 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.973 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2dfea9d1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.974 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 NetworkManager[48870]: <info>  [1763978160.9751] manager: (tap2dfea9d1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 24 04:56:00 np0005533252 kernel: tap2dfea9d1-70: entered promiscuous mode
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.977 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.977 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2dfea9d1-70, col_values=(('external_ids', {'iface-id': '1c47826d-7d98-41f9-bde5-d6e4ced7b639'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.978 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:00Z|00047|binding|INFO|Releasing lport 1c47826d-7d98-41f9-bde5-d6e4ced7b639 from this chassis (sb_readonly=0)
Nov 24 04:56:00 np0005533252 nova_compute[230010]: 2025-11-24 09:56:00.991 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.992 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2dfea9d1-73f4-435f-ade1-dce53efe0c39.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2dfea9d1-73f4-435f-ade1-dce53efe0c39.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.993 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[5861299b-a240-4c29-bfad-f6508d974585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.994 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: global
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    log         /dev/log local0 debug
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    log-tag     haproxy-metadata-proxy-2dfea9d1-73f4-435f-ade1-dce53efe0c39
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    user        root
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    group       root
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    maxconn     1024
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    pidfile     /var/lib/neutron/external/pids/2dfea9d1-73f4-435f-ade1-dce53efe0c39.pid.haproxy
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    daemon
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: defaults
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    log global
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    mode http
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    option httplog
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    option dontlognull
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    option http-server-close
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    option forwardfor
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    retries                 3
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    timeout http-request    30s
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    timeout connect         30s
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    timeout client          32s
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    timeout server          32s
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    timeout http-keep-alive 30s
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: listen listener
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    bind 169.254.169.254:80
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    server metadata /var/lib/neutron/metadata_proxy
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]:    http-request add-header X-OVN-Network-ID 2dfea9d1-73f4-435f-ade1-dce53efe0c39
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 24 04:56:00 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:00.994 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'env', 'PROCESS_TAG=haproxy-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2dfea9d1-73f4-435f-ade1-dce53efe0c39.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 24 04:56:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 04:56:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2361167424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 04:56:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 04:56:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2361167424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.170 230014 DEBUG nova.compute.manager [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.171 230014 DEBUG oslo_concurrency.lockutils [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.172 230014 DEBUG oslo_concurrency.lockutils [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.172 230014 DEBUG oslo_concurrency.lockutils [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.173 230014 DEBUG nova.compute.manager [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.174 230014 WARNING nova.compute.manager [req-9312f730-fd44-40ee-89dd-87e6cd5b55b6 req-060b8118-81e3-4c58-9b51-1883427682a2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 for instance with vm_state active and task_state None.#033[00m
Nov 24 04:56:01 np0005533252 podman[236105]: 2025-11-24 09:56:01.359935635 +0000 UTC m=+0.044039199 container create 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 24 04:56:01 np0005533252 systemd[1]: Started libpod-conmon-2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b.scope.
Nov 24 04:56:01 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:56:01 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/159129d6836993ad22c9e4f7333101e324e380206278a56bdc9a5ccf6ea421e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 04:56:01 np0005533252 podman[236105]: 2025-11-24 09:56:01.423230832 +0000 UTC m=+0.107334436 container init 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 04:56:01 np0005533252 podman[236105]: 2025-11-24 09:56:01.430364546 +0000 UTC m=+0.114468110 container start 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 04:56:01 np0005533252 podman[236105]: 2025-11-24 09:56:01.335832765 +0000 UTC m=+0.019936359 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:56:01 np0005533252 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [NOTICE]   (236138) : New worker (236144) forked
Nov 24 04:56:01 np0005533252 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [NOTICE]   (236138) : Loading success.
Nov 24 04:56:01 np0005533252 podman[236118]: 2025-11-24 09:56:01.455261245 +0000 UTC m=+0.057727973 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.979 230014 DEBUG nova.network.neutron [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updated VIF entry in instance network info cache for port faa80fbe-f017-47cd-96c8-ca0747a39410. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.980 230014 DEBUG nova.network.neutron [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:56:01 np0005533252 nova_compute[230010]: 2025-11-24 09:56:01.997 230014 DEBUG oslo_concurrency.lockutils [req-53c9d336-2387-470f-bea6-87d4b3cc617a req-e43fcafc-9e08-4c4c-b52a-c499591a77a3 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:56:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:02.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.352 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-faa80fbe-f017-47cd-96c8-ca0747a39410" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.353 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-faa80fbe-f017-47cd-96c8-ca0747a39410" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.367 230014 DEBUG nova.objects.instance [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'flavor' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.388 230014 DEBUG nova.virt.libvirt.vif [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.388 230014 DEBUG nova.network.os_vif_util [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.389 230014 DEBUG nova.network.os_vif_util [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.393 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.395 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.397 230014 DEBUG nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Attempting to detach device tapfaa80fbe-f0 from instance 8e009e75-a97b-4c5d-a470-5db1137cb407 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.397 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] detach device xml: <interface type="ethernet">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <mac address="fa:16:3e:23:de:6c"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <model type="virtio"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <mtu size="1442"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <target dev="tapfaa80fbe-f0"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: </interface>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.405 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.409 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <name>instance-00000003</name>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:56:00</nova:creationTime>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:port uuid="faa80fbe-f017-47cd-96c8-ca0747a39410">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <memory unit='KiB'>131072</memory>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <vcpu placement='static'>1</vcpu>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <resource>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <partition>/machine</partition>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </resource>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <sysinfo type='smbios'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='manufacturer'>RDO</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='product'>OpenStack Compute</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='serial'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='uuid'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='family'>Virtual Machine</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <boot dev='hd'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <smbios mode='sysinfo'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <vmcoreinfo state='on'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <cpu mode='custom' match='exact' check='full'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <vendor>AMD</vendor>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='x2apic'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc-deadline'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='hypervisor'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc_adjust'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='spec-ctrl'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='stibp'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='ssbd'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='cmp_legacy'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='overflow-recov'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='succor'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='ibrs'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='amd-ssbd'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='virt-ssbd'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='lbrv'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='tsc-scale'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='vmcb-clean'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='flushbyasid'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pause-filter'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pfthreshold'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='xsaves'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svm'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='topoext'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='npt'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='nrip-save'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <clock offset='utc'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <timer name='pit' tickpolicy='delay'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <timer name='hpet' present='no'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <on_poweroff>destroy</on_poweroff>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <on_reboot>restart</on_reboot>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <on_crash>destroy</on_crash>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <disk type='network' device='disk'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk' index='2'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target dev='vda' bus='virtio'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='virtio-disk0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <disk type='network' device='cdrom'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config' index='1'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target dev='sda' bus='sata'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <readonly/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='sata0-0-0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='0' model='pcie-root'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pcie.0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='1' port='0x10'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='2' port='0x11'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='3' port='0x12'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.3'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='4' port='0x13'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.4'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='5' port='0x14'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.5'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='6' port='0x15'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.6'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='7' port='0x16'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.7'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='8' port='0x17'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.8'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='9' port='0x18'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.9'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='10' port='0x19'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.10'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='11' port='0x1a'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.11'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='12' port='0x1b'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.12'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='13' port='0x1c'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.13'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='14' port='0x1d'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.14'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='15' port='0x1e'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.15'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='16' port='0x1f'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.16'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='17' port='0x20'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.17'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='18' port='0x21'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.18'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='19' port='0x22'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.19'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='20' port='0x23'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.20'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='21' port='0x24'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.21'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='22' port='0x25'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.22'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='23' port='0x26'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.23'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='24' port='0x27'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.24'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='25' port='0x28'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.25'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-pci-bridge'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.26'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='usb'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='sata' index='0'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='ide'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:a4:f1:71'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target dev='tape962e27f-80'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='net0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:23:de:6c'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target dev='tapfaa80fbe-f0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='net1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <serial type='pty'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target type='isa-serial' port='0'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <model name='isa-serial'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </target>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <console type='pty' tty='/dev/pts/0'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target type='serial' port='0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <input type='tablet' bus='usb'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='input0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='usb' bus='0' port='1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <input type='mouse' bus='ps2'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='input1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <input type='keyboard' bus='ps2'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='input2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <listen type='address' address='::0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <audio id='1' type='none'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model type='virtio' heads='1' primary='yes'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='video0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <watchdog model='itco' action='reset'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='watchdog0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </watchdog>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <memballoon model='virtio'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <stats period='10'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='balloon0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <rng model='virtio'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <backend model='random'>/dev/urandom</backend>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='rng0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <label>system_u:system_r:svirt_t:s0:c684,c1005</label>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c684,c1005</imagelabel>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <label>+107:+107</label>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <imagelabel>+107:+107</imagelabel>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.410 230014 INFO nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully detached device tapfaa80fbe-f0 from instance 8e009e75-a97b-4c5d-a470-5db1137cb407 from the persistent domain config.#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.411 230014 DEBUG nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] (1/8): Attempting to detach device tapfaa80fbe-f0 with device alias net1 from instance 8e009e75-a97b-4c5d-a470-5db1137cb407 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.411 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] detach device xml: <interface type="ethernet">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <mac address="fa:16:3e:23:de:6c"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <model type="virtio"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <mtu size="1442"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <target dev="tapfaa80fbe-f0"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: </interface>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 24 04:56:02 np0005533252 kernel: tapfaa80fbe-f0 (unregistering): left promiscuous mode
Nov 24 04:56:02 np0005533252 NetworkManager[48870]: <info>  [1763978162.4568] device (tapfaa80fbe-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 04:56:02 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:02Z|00048|binding|INFO|Releasing lport faa80fbe-f017-47cd-96c8-ca0747a39410 from this chassis (sb_readonly=0)
Nov 24 04:56:02 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:02Z|00049|binding|INFO|Setting lport faa80fbe-f017-47cd-96c8-ca0747a39410 down in Southbound
Nov 24 04:56:02 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:02Z|00050|binding|INFO|Removing iface tapfaa80fbe-f0 ovn-installed in OVS
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.463 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.469 230014 DEBUG nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Received event <DeviceRemovedEvent: 1763978162.469293, 8e009e75-a97b-4c5d-a470-5db1137cb407 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.469 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:de:6c 10.100.0.29'], port_security=['fa:16:3e:23:de:6c 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '8e009e75-a97b-4c5d-a470-5db1137cb407', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3401f0a-5661-425e-b817-1a9ea0eafa9c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=faa80fbe-f017-47cd-96c8-ca0747a39410) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.470 230014 DEBUG nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Start waiting for the detach event from libvirt for device tapfaa80fbe-f0 with device alias net1 for instance 8e009e75-a97b-4c5d-a470-5db1137cb407 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.471 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.471 142336 INFO neutron.agent.ovn.metadata.agent [-] Port faa80fbe-f017-47cd-96c8-ca0747a39410 in datapath 2dfea9d1-73f4-435f-ade1-dce53efe0c39 unbound from our chassis
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.473 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2dfea9d1-73f4-435f-ade1-dce53efe0c39, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.474 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[60ed0e81-8c8b-4d5e-b285-7d2e428847f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.475 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 namespace which is not needed anymore
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.475 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> not found in domain: <domain type='kvm' id='2'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <name>instance-00000003</name>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:56:00</nova:creationTime>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:port uuid="faa80fbe-f017-47cd-96c8-ca0747a39410">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <memory unit='KiB'>131072</memory>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <vcpu placement='static'>1</vcpu>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <resource>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <partition>/machine</partition>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </resource>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <sysinfo type='smbios'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='manufacturer'>RDO</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='product'>OpenStack Compute</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='serial'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='uuid'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <entry name='family'>Virtual Machine</entry>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <boot dev='hd'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <smbios mode='sysinfo'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <vmcoreinfo state='on'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <cpu mode='custom' match='exact' check='full'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <vendor>AMD</vendor>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='x2apic'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc-deadline'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='hypervisor'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc_adjust'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='spec-ctrl'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='stibp'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='ssbd'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='cmp_legacy'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='overflow-recov'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='succor'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='ibrs'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='amd-ssbd'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='virt-ssbd'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='lbrv'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='tsc-scale'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='vmcb-clean'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='flushbyasid'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pause-filter'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pfthreshold'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='xsaves'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svm'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='require' name='topoext'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='npt'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <feature policy='disable' name='nrip-save'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <clock offset='utc'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <timer name='pit' tickpolicy='delay'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <timer name='hpet' present='no'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <on_poweroff>destroy</on_poweroff>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <on_reboot>restart</on_reboot>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <on_crash>destroy</on_crash>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <disk type='network' device='disk'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk' index='2'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target dev='vda' bus='virtio'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='virtio-disk0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <disk type='network' device='cdrom'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config' index='1'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target dev='sda' bus='sata'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <readonly/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='sata0-0-0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='0' model='pcie-root'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pcie.0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='1' port='0x10'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='2' port='0x11'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='3' port='0x12'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.3'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='4' port='0x13'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.4'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='5' port='0x14'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.5'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='6' port='0x15'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.6'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='7' port='0x16'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.7'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='8' port='0x17'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.8'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='9' port='0x18'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.9'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='10' port='0x19'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.10'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='11' port='0x1a'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.11'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='12' port='0x1b'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.12'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='13' port='0x1c'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.13'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='14' port='0x1d'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.14'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='15' port='0x1e'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.15'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='16' port='0x1f'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.16'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='17' port='0x20'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.17'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='18' port='0x21'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.18'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='19' port='0x22'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.19'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='20' port='0x23'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.20'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='21' port='0x24'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.21'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='22' port='0x25'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.22'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='23' port='0x26'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.23'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='24' port='0x27'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.24'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target chassis='25' port='0x28'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.25'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model name='pcie-pci-bridge'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='pci.26'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='usb'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <controller type='sata' index='0'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='ide'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:a4:f1:71'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target dev='tape962e27f-80'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='net0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <serial type='pty'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target type='isa-serial' port='0'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:        <model name='isa-serial'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      </target>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <console type='pty' tty='/dev/pts/0'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <target type='serial' port='0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <input type='tablet' bus='usb'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='input0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='usb' bus='0' port='1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <input type='mouse' bus='ps2'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='input1'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <input type='keyboard' bus='ps2'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='input2'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <listen type='address' address='::0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <audio id='1' type='none'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <model type='virtio' heads='1' primary='yes'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='video0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <watchdog model='itco' action='reset'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='watchdog0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </watchdog>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <memballoon model='virtio'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <stats period='10'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='balloon0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <rng model='virtio'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <backend model='random'>/dev/urandom</backend>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <alias name='rng0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <label>system_u:system_r:svirt_t:s0:c684,c1005</label>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c684,c1005</imagelabel>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <label>+107:+107</label>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <imagelabel>+107:+107</imagelabel>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.475 230014 INFO nova.virt.libvirt.driver [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully detached device tapfaa80fbe-f0 from instance 8e009e75-a97b-4c5d-a470-5db1137cb407 from the live domain config.#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.475 230014 DEBUG nova.virt.libvirt.vif [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.476 230014 DEBUG nova.network.os_vif_util [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.476 230014 DEBUG nova.network.os_vif_util [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.476 230014 DEBUG os_vif [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.478 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.478 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaa80fbe-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.480 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.481 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.483 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.486 230014 INFO os_vif [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0')#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.487 230014 DEBUG nova.virt.libvirt.guest [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:56:02</nova:creationTime>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 04:56:02 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:02 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:56:02 np0005533252 nova_compute[230010]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 24 04:56:02 np0005533252 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [NOTICE]   (236138) : haproxy version is 2.8.14-c23fe91
Nov 24 04:56:02 np0005533252 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [NOTICE]   (236138) : path to executable is /usr/sbin/haproxy
Nov 24 04:56:02 np0005533252 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [WARNING]  (236138) : Exiting Master process...
Nov 24 04:56:02 np0005533252 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [ALERT]    (236138) : Current worker (236144) exited with code 143 (Terminated)
Nov 24 04:56:02 np0005533252 neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39[236121]: [WARNING]  (236138) : All workers exited. Exiting... (0)
Nov 24 04:56:02 np0005533252 systemd[1]: libpod-2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b.scope: Deactivated successfully.
Nov 24 04:56:02 np0005533252 podman[236176]: 2025-11-24 09:56:02.597211634 +0000 UTC m=+0.040366308 container died 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 04:56:02 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b-userdata-shm.mount: Deactivated successfully.
Nov 24 04:56:02 np0005533252 systemd[1]: var-lib-containers-storage-overlay-159129d6836993ad22c9e4f7333101e324e380206278a56bdc9a5ccf6ea421e5-merged.mount: Deactivated successfully.
Nov 24 04:56:02 np0005533252 podman[236176]: 2025-11-24 09:56:02.641119948 +0000 UTC m=+0.084274622 container cleanup 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 24 04:56:02 np0005533252 systemd[1]: libpod-conmon-2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b.scope: Deactivated successfully.
Nov 24 04:56:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:02.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:02 np0005533252 podman[236207]: 2025-11-24 09:56:02.701040163 +0000 UTC m=+0.041126437 container remove 2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.706 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[7abea4c7-bb38-443f-bd86-1b5857e2d5cd]: (4, ('Mon Nov 24 09:56:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 (2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b)\n2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b\nMon Nov 24 09:56:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 (2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b)\n2a59cf21a7f5a89b1e1aad9ebbfb34d0daec952f197a040b20728d4d777f604b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.708 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c361bbd4-1601-468f-976b-4fbdc4abedcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.709 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dfea9d1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.711 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:02 np0005533252 kernel: tap2dfea9d1-70: left promiscuous mode
Nov 24 04:56:02 np0005533252 nova_compute[230010]: 2025-11-24 09:56:02.725 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.728 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[de644d23-cca5-4586-ab22-46df93e94259]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.740 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[825db17b-c595-4852-bdf0-275b7316905e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.741 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ff2d11-bb76-44ea-8973-8d0455119a9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.754 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f8eec8-83c8-4a3e-8f83-7bd6895c2792]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410432, 'reachable_time': 39730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236243, 'error': None, 'target': 'ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:02 np0005533252 systemd[1]: run-netns-ovnmeta\x2d2dfea9d1\x2d73f4\x2d435f\x2dade1\x2ddce53efe0c39.mount: Deactivated successfully.
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.758 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2dfea9d1-73f4-435f-ade1-dce53efe0c39 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 24 04:56:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:02.758 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[d54bc886-5e7b-4d81-9c4c-d2b93d271faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:03 np0005533252 nova_compute[230010]: 2025-11-24 09:56:03.426 230014 DEBUG nova.compute.manager [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:03 np0005533252 nova_compute[230010]: 2025-11-24 09:56:03.427 230014 DEBUG oslo_concurrency.lockutils [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:03 np0005533252 nova_compute[230010]: 2025-11-24 09:56:03.427 230014 DEBUG oslo_concurrency.lockutils [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:03 np0005533252 nova_compute[230010]: 2025-11-24 09:56:03.428 230014 DEBUG oslo_concurrency.lockutils [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:03 np0005533252 nova_compute[230010]: 2025-11-24 09:56:03.428 230014 DEBUG nova.compute.manager [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:56:03 np0005533252 nova_compute[230010]: 2025-11-24 09:56:03.428 230014 WARNING nova.compute.manager [req-88adbbf0-ae7a-4727-b6f3-ffe3bf466b8d req-05d96d72-5082-4dd7-a497-1fbd525a43b5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 for instance with vm_state active and task_state None.#033[00m
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:56:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:56:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:04 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:56:04 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:56:04 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:56:04 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:56:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:04.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.316 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.316 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.316 230014 DEBUG nova.network.neutron [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.522 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-unplugged-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.523 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.523 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.523 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.524 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-unplugged-faa80fbe-f017-47cd-96c8-ca0747a39410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.524 230014 WARNING nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-unplugged-faa80fbe-f017-47cd-96c8-ca0747a39410 for instance with vm_state active and task_state None.#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.524 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.524 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.525 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.525 230014 DEBUG oslo_concurrency.lockutils [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.525 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.525 230014 WARNING nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-faa80fbe-f017-47cd-96c8-ca0747a39410 for instance with vm_state active and task_state None.#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.526 230014 DEBUG nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-deleted-faa80fbe-f017-47cd-96c8-ca0747a39410 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.526 230014 INFO nova.compute.manager [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Neutron deleted interface faa80fbe-f017-47cd-96c8-ca0747a39410; detaching it from the instance and deleting it from the info cache#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.526 230014 DEBUG nova.network.neutron [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.544 230014 DEBUG nova.objects.instance [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lazy-loading 'system_metadata' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.586 230014 DEBUG nova.objects.instance [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lazy-loading 'flavor' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.623 230014 DEBUG nova.virt.libvirt.vif [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.624 230014 DEBUG nova.network.os_vif_util [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.624 230014 DEBUG nova.network.os_vif_util [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.627 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.630 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <name>instance-00000003</name>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:56:02</nova:creationTime>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:56:05 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <memory unit='KiB'>131072</memory>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <vcpu placement='static'>1</vcpu>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <resource>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <partition>/machine</partition>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </resource>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <sysinfo type='smbios'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='manufacturer'>RDO</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='product'>OpenStack Compute</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='serial'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='uuid'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='family'>Virtual Machine</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <boot dev='hd'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <smbios mode='sysinfo'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <vmcoreinfo state='on'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <cpu mode='custom' match='exact' check='full'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <vendor>AMD</vendor>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='x2apic'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc-deadline'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='hypervisor'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc_adjust'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='spec-ctrl'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='stibp'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='ssbd'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='cmp_legacy'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='overflow-recov'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='succor'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='ibrs'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='amd-ssbd'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='virt-ssbd'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='lbrv'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='tsc-scale'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='vmcb-clean'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='flushbyasid'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pause-filter'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pfthreshold'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='xsaves'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svm'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='topoext'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='npt'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='nrip-save'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <clock offset='utc'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <timer name='pit' tickpolicy='delay'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <timer name='hpet' present='no'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <on_poweroff>destroy</on_poweroff>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <on_reboot>restart</on_reboot>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <on_crash>destroy</on_crash>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <disk type='network' device='disk'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk' index='2'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target dev='vda' bus='virtio'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='virtio-disk0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <disk type='network' device='cdrom'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config' index='1'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target dev='sda' bus='sata'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <readonly/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='sata0-0-0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='0' model='pcie-root'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pcie.0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='1' port='0x10'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='2' port='0x11'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='3' port='0x12'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.3'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='4' port='0x13'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.4'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='5' port='0x14'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.5'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='6' port='0x15'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.6'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='7' port='0x16'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.7'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='8' port='0x17'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.8'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='9' port='0x18'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.9'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='10' port='0x19'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.10'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='11' port='0x1a'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.11'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='12' port='0x1b'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.12'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='13' port='0x1c'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.13'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='14' port='0x1d'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.14'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='15' port='0x1e'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.15'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='16' port='0x1f'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.16'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='17' port='0x20'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.17'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='18' port='0x21'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.18'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='19' port='0x22'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.19'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='20' port='0x23'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.20'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='21' port='0x24'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.21'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='22' port='0x25'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.22'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='23' port='0x26'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.23'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='24' port='0x27'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.24'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='25' port='0x28'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.25'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-pci-bridge'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.26'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='usb'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='sata' index='0'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='ide'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:a4:f1:71'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target dev='tape962e27f-80'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='net0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <serial type='pty'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target type='isa-serial' port='0'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <model name='isa-serial'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </target>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <console type='pty' tty='/dev/pts/0'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target type='serial' port='0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <input type='tablet' bus='usb'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='input0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='usb' bus='0' port='1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <input type='mouse' bus='ps2'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='input1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <input type='keyboard' bus='ps2'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='input2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <listen type='address' address='::0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <audio id='1' type='none'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model type='virtio' heads='1' primary='yes'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='video0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <watchdog model='itco' action='reset'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='watchdog0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </watchdog>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <memballoon model='virtio'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <stats period='10'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='balloon0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <rng model='virtio'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <backend model='random'>/dev/urandom</backend>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='rng0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <label>system_u:system_r:svirt_t:s0:c684,c1005</label>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c684,c1005</imagelabel>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <label>+107:+107</label>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <imagelabel>+107:+107</imagelabel>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:56:05 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:56:05 np0005533252 nova_compute[230010]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.632 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.634 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:23:de:6c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfaa80fbe-f0"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <name>instance-00000003</name>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <uuid>8e009e75-a97b-4c5d-a470-5db1137cb407</uuid>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:56:02</nova:creationTime>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:56:05 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <memory unit='KiB'>131072</memory>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <vcpu placement='static'>1</vcpu>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <resource>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <partition>/machine</partition>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </resource>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <sysinfo type='smbios'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='manufacturer'>RDO</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='product'>OpenStack Compute</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='serial'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='uuid'>8e009e75-a97b-4c5d-a470-5db1137cb407</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <entry name='family'>Virtual Machine</entry>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <boot dev='hd'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <smbios mode='sysinfo'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <vmcoreinfo state='on'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <cpu mode='custom' match='exact' check='full'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <vendor>AMD</vendor>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='x2apic'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc-deadline'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='hypervisor'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc_adjust'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='spec-ctrl'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='stibp'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='ssbd'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='cmp_legacy'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='overflow-recov'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='succor'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='ibrs'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='amd-ssbd'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='virt-ssbd'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='lbrv'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='tsc-scale'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='vmcb-clean'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='flushbyasid'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pause-filter'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pfthreshold'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='xsaves'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svm'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='require' name='topoext'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='npt'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <feature policy='disable' name='nrip-save'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <clock offset='utc'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <timer name='pit' tickpolicy='delay'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <timer name='hpet' present='no'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <on_poweroff>destroy</on_poweroff>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <on_reboot>restart</on_reboot>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <on_crash>destroy</on_crash>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <disk type='network' device='disk'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk' index='2'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target dev='vda' bus='virtio'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='virtio-disk0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <disk type='network' device='cdrom'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/8e009e75-a97b-4c5d-a470-5db1137cb407_disk.config' index='1'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target dev='sda' bus='sata'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <readonly/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='sata0-0-0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='0' model='pcie-root'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pcie.0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='1' port='0x10'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='2' port='0x11'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='3' port='0x12'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.3'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='4' port='0x13'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.4'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='5' port='0x14'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.5'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='6' port='0x15'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.6'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='7' port='0x16'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.7'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='8' port='0x17'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.8'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='9' port='0x18'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.9'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='10' port='0x19'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.10'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='11' port='0x1a'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.11'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='12' port='0x1b'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.12'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='13' port='0x1c'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.13'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='14' port='0x1d'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.14'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='15' port='0x1e'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.15'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='16' port='0x1f'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.16'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='17' port='0x20'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.17'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='18' port='0x21'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.18'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='19' port='0x22'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.19'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='20' port='0x23'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.20'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='21' port='0x24'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.21'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='22' port='0x25'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.22'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='23' port='0x26'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.23'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='24' port='0x27'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.24'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target chassis='25' port='0x28'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.25'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model name='pcie-pci-bridge'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='pci.26'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='usb'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <controller type='sata' index='0'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='ide'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:a4:f1:71'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target dev='tape962e27f-80'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='net0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <serial type='pty'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target type='isa-serial' port='0'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:        <model name='isa-serial'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      </target>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <console type='pty' tty='/dev/pts/0'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407/console.log' append='off'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <target type='serial' port='0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <input type='tablet' bus='usb'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='input0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='usb' bus='0' port='1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <input type='mouse' bus='ps2'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='input1'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <input type='keyboard' bus='ps2'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='input2'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <listen type='address' address='::0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <audio id='1' type='none'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <model type='virtio' heads='1' primary='yes'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='video0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <watchdog model='itco' action='reset'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='watchdog0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </watchdog>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <memballoon model='virtio'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <stats period='10'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='balloon0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <rng model='virtio'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <backend model='random'>/dev/urandom</backend>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <alias name='rng0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <label>system_u:system_r:svirt_t:s0:c684,c1005</label>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c684,c1005</imagelabel>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <label>+107:+107</label>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <imagelabel>+107:+107</imagelabel>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:56:05 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:56:05 np0005533252 nova_compute[230010]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.636 230014 WARNING nova.virt.libvirt.driver [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Detaching interface fa:16:3e:23:de:6c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapfaa80fbe-f0' not found.#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.637 230014 DEBUG nova.virt.libvirt.vif [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.638 230014 DEBUG nova.network.os_vif_util [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converting VIF {"id": "faa80fbe-f017-47cd-96c8-ca0747a39410", "address": "fa:16:3e:23:de:6c", "network": {"id": "2dfea9d1-73f4-435f-ade1-dce53efe0c39", "bridge": "br-int", "label": "tempest-network-smoke--1540330946", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaa80fbe-f0", "ovs_interfaceid": "faa80fbe-f017-47cd-96c8-ca0747a39410", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.638 230014 DEBUG nova.network.os_vif_util [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.639 230014 DEBUG os_vif [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.641 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.641 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaa80fbe-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.641 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.644 230014 INFO os_vif [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:de:6c,bridge_name='br-int',has_traffic_filtering=True,id=faa80fbe-f017-47cd-96c8-ca0747a39410,network=Network(2dfea9d1-73f4-435f-ade1-dce53efe0c39),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaa80fbe-f0')#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.645 230014 DEBUG nova.virt.libvirt.guest [req-56fc6d37-cf30-4c14-a29e-b1cc85615ec9 req-cf01728a-61d3-4a6c-8614-d109bc7d0fb0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1469091475</nova:name>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:56:05</nova:creationTime>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    <nova:port uuid="e962e27f-80bf-4103-98ae-d8af84c6fc28">
Nov 24 04:56:05 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:56:05 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:56:05 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:56:05 np0005533252 nova_compute[230010]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 24 04:56:05 np0005533252 nova_compute[230010]: 2025-11-24 09:56:05.771 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:06.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:06 np0005533252 podman[236306]: 2025-11-24 09:56:06.340347251 +0000 UTC m=+0.083465572 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 04:56:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:06.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:07 np0005533252 nova_compute[230010]: 2025-11-24 09:56:07.481 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:07 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:07Z|00051|binding|INFO|Releasing lport 9bf2a93f-cf2b-4180-87cd-4fecaf4abe0b from this chassis (sb_readonly=0)
Nov 24 04:56:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:56:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:56:07 np0005533252 nova_compute[230010]: 2025-11-24 09:56:07.758 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:08.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.316 230014 INFO nova.network.neutron [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Port faa80fbe-f017-47cd-96c8-ca0747a39410 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.316 230014 DEBUG nova.network.neutron [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.334 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.350 230014 DEBUG oslo_concurrency.lockutils [None req-1f39edd7-ccec-46ee-b7bf-91daf5c48308 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-8e009e75-a97b-4c5d-a470-5db1137cb407-faa80fbe-f017-47cd-96c8-ca0747a39410" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:08.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:08 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:56:08 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.781 230014 DEBUG nova.compute.manager [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.782 230014 DEBUG nova.compute.manager [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing instance network info cache due to event network-changed-e962e27f-80bf-4103-98ae-d8af84c6fc28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.782 230014 DEBUG oslo_concurrency.lockutils [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.782 230014 DEBUG oslo_concurrency.lockutils [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.782 230014 DEBUG nova.network.neutron [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Refreshing network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.845 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.845 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.846 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.846 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.846 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.847 230014 INFO nova.compute.manager [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Terminating instance#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.848 230014 DEBUG nova.compute.manager [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 24 04:56:08 np0005533252 kernel: tape962e27f-80 (unregistering): left promiscuous mode
Nov 24 04:56:08 np0005533252 NetworkManager[48870]: <info>  [1763978168.8897] device (tape962e27f-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.935 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:08 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:08Z|00052|binding|INFO|Releasing lport e962e27f-80bf-4103-98ae-d8af84c6fc28 from this chassis (sb_readonly=0)
Nov 24 04:56:08 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:08Z|00053|binding|INFO|Setting lport e962e27f-80bf-4103-98ae-d8af84c6fc28 down in Southbound
Nov 24 04:56:08 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:08Z|00054|binding|INFO|Removing iface tape962e27f-80 ovn-installed in OVS
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.937 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:08 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.947 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:f1:71 10.100.0.6'], port_security=['fa:16:3e:a4:f1:71 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8e009e75-a97b-4c5d-a470-5db1137cb407', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-636fec29-e18e-45f1-aabc-369f5fd0d593', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '08677d44-dac1-4cc6-ac2a-f951a8415b1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cab95497-29d8-4481-acd1-a71d08bb0310, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=e962e27f-80bf-4103-98ae-d8af84c6fc28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:56:08 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.949 142336 INFO neutron.agent.ovn.metadata.agent [-] Port e962e27f-80bf-4103-98ae-d8af84c6fc28 in datapath 636fec29-e18e-45f1-aabc-369f5fd0d593 unbound from our chassis#033[00m
Nov 24 04:56:08 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.950 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 636fec29-e18e-45f1-aabc-369f5fd0d593, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 24 04:56:08 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.951 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d47bfd85-fcff-4842-bdab-6ba8a3aa665d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:08 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:08.951 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 namespace which is not needed anymore#033[00m
Nov 24 04:56:08 np0005533252 nova_compute[230010]: 2025-11-24 09:56:08.953 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:08 np0005533252 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 24 04:56:08 np0005533252 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 14.671s CPU time.
Nov 24 04:56:08 np0005533252 systemd-machined[193537]: Machine qemu-2-instance-00000003 terminated.
Nov 24 04:56:09 np0005533252 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [NOTICE]   (235909) : haproxy version is 2.8.14-c23fe91
Nov 24 04:56:09 np0005533252 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [NOTICE]   (235909) : path to executable is /usr/sbin/haproxy
Nov 24 04:56:09 np0005533252 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [WARNING]  (235909) : Exiting Master process...
Nov 24 04:56:09 np0005533252 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [ALERT]    (235909) : Current worker (235912) exited with code 143 (Terminated)
Nov 24 04:56:09 np0005533252 neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593[235902]: [WARNING]  (235909) : All workers exited. Exiting... (0)
Nov 24 04:56:09 np0005533252 systemd[1]: libpod-2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf.scope: Deactivated successfully.
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.077 230014 INFO nova.virt.libvirt.driver [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Instance destroyed successfully.#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.077 230014 DEBUG nova.objects.instance [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 8e009e75-a97b-4c5d-a470-5db1137cb407 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:56:09 np0005533252 podman[236411]: 2025-11-24 09:56:09.079793837 +0000 UTC m=+0.043898654 container died 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.092 230014 DEBUG nova.virt.libvirt.vif [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:55:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1469091475',display_name='tempest-TestNetworkBasicOps-server-1469091475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1469091475',id=3,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZZQMjyTSwgkfidZfPhBPgBcV62YzWExHHXsl1BnsLfJjAX1c531QA8puLkgpD93eEa7lPae/Gh1kFnVkWZAW6FTPgZg7BzeD7RovkQcC7HReAVJUg962qa1kvY0rkgvg==',key_name='tempest-TestNetworkBasicOps-853741544',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:55:27Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-mfq37y5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:55:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=8e009e75-a97b-4c5d-a470-5db1137cb407,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.092 230014 DEBUG nova.network.os_vif_util [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.093 230014 DEBUG nova.network.os_vif_util [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.093 230014 DEBUG os_vif [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.094 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.095 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape962e27f-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.096 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.097 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.099 230014 INFO os_vif [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:f1:71,bridge_name='br-int',has_traffic_filtering=True,id=e962e27f-80bf-4103-98ae-d8af84c6fc28,network=Network(636fec29-e18e-45f1-aabc-369f5fd0d593),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape962e27f-80')#033[00m
Nov 24 04:56:09 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf-userdata-shm.mount: Deactivated successfully.
Nov 24 04:56:09 np0005533252 systemd[1]: var-lib-containers-storage-overlay-d411d6d830a7518401f278920271906d8077c3d66c4ff88eb3a65c883bda73c0-merged.mount: Deactivated successfully.
Nov 24 04:56:09 np0005533252 podman[236411]: 2025-11-24 09:56:09.117748925 +0000 UTC m=+0.081853742 container cleanup 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 04:56:09 np0005533252 systemd[1]: libpod-conmon-2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf.scope: Deactivated successfully.
Nov 24 04:56:09 np0005533252 podman[236469]: 2025-11-24 09:56:09.18174519 +0000 UTC m=+0.044668054 container remove 2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.187 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[de71a360-f523-4776-9b55-5ea4dafd112b]: (4, ('Mon Nov 24 09:56:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 (2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf)\n2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf\nMon Nov 24 09:56:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 (2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf)\n2102cc4a70d7571ed66844281c81a17c454c21383f4d3318219469d58eddbcbf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.189 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e8248f-8427-403a-87cc-822bffeb205d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.190 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap636fec29-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:09 np0005533252 kernel: tap636fec29-e0: left promiscuous mode
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.192 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.193 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.197 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb97ecc-e1f9-41cd-aae4-e666448b6678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.200 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.206 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.215 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9faa5f49-5c43-48b4-b786-7cf76c788699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.216 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d138ee04-b4d4-48ef-a821-6be533a76708]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.231 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d7dc07a5-937d-4ff6-b84d-918ef71451ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407032, 'reachable_time': 18648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236488, 'error': None, 'target': 'ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:09 np0005533252 systemd[1]: run-netns-ovnmeta\x2d636fec29\x2de18e\x2d45f1\x2daabc\x2d369f5fd0d593.mount: Deactivated successfully.
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.234 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-636fec29-e18e-45f1-aabc-369f5fd0d593 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.234 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[99bfec1b-747c-427a-a04a-deda899f164f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:56:09 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:09.235 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.366 230014 DEBUG nova.compute.manager [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-unplugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.366 230014 DEBUG oslo_concurrency.lockutils [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.366 230014 DEBUG oslo_concurrency.lockutils [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.366 230014 DEBUG oslo_concurrency.lockutils [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.367 230014 DEBUG nova.compute.manager [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-unplugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.367 230014 DEBUG nova.compute.manager [req-f316f06d-aba4-41ad-a258-c175537ef3fb req-291b25d9-305e-4b29-a5fe-08ce3ef64740 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-unplugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 24 04:56:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.475 230014 INFO nova.virt.libvirt.driver [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Deleting instance files /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407_del#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.475 230014 INFO nova.virt.libvirt.driver [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Deletion of /var/lib/nova/instances/8e009e75-a97b-4c5d-a470-5db1137cb407_del complete#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.536 230014 INFO nova.compute.manager [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.536 230014 DEBUG oslo.service.loopingcall [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.536 230014 DEBUG nova.compute.manager [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 24 04:56:09 np0005533252 nova_compute[230010]: 2025-11-24 09:56:09.537 230014 DEBUG nova.network.neutron [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 24 04:56:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:10.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.528 230014 DEBUG nova.network.neutron [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.542 230014 INFO nova.compute.manager [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Took 1.01 seconds to deallocate network for instance.#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.570 230014 DEBUG nova.network.neutron [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updated VIF entry in instance network info cache for port e962e27f-80bf-4103-98ae-d8af84c6fc28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.570 230014 DEBUG nova.network.neutron [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [{"id": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "address": "fa:16:3e:a4:f1:71", "network": {"id": "636fec29-e18e-45f1-aabc-369f5fd0d593", "bridge": "br-int", "label": "tempest-network-smoke--778674541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape962e27f-80", "ovs_interfaceid": "e962e27f-80bf-4103-98ae-d8af84c6fc28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.644 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.645 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.645 230014 DEBUG oslo_concurrency.lockutils [req-49bc461f-a8bb-4dff-8cfe-08d8b377f561 req-20fdecd2-0244-4b6a-a5d5-099475744c71 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-8e009e75-a97b-4c5d-a470-5db1137cb407" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:56:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.692 230014 DEBUG oslo_concurrency.processutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.778 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.854 230014 DEBUG nova.compute.manager [req-768daf01-d8de-4440-be00-da7eaacf0f1f req-17e275f3-6728-4936-89ae-538f714f240e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-deleted-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.854 230014 INFO nova.compute.manager [req-768daf01-d8de-4440-be00-da7eaacf0f1f req-17e275f3-6728-4936-89ae-538f714f240e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Neutron deleted interface e962e27f-80bf-4103-98ae-d8af84c6fc28; detaching it from the instance and deleting it from the info cache#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.855 230014 DEBUG nova.network.neutron [req-768daf01-d8de-4440-be00-da7eaacf0f1f req-17e275f3-6728-4936-89ae-538f714f240e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:56:10 np0005533252 nova_compute[230010]: 2025-11-24 09:56:10.885 230014 DEBUG nova.compute.manager [req-768daf01-d8de-4440-be00-da7eaacf0f1f req-17e275f3-6728-4936-89ae-538f714f240e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Detach interface failed, port_id=e962e27f-80bf-4103-98ae-d8af84c6fc28, reason: Instance 8e009e75-a97b-4c5d-a470-5db1137cb407 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 24 04:56:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:56:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/785571714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.150 230014 DEBUG oslo_concurrency.processutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.157 230014 DEBUG nova.compute.provider_tree [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.169 230014 DEBUG nova.scheduler.client.report [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.190 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.212 230014 INFO nova.scheduler.client.report [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 8e009e75-a97b-4c5d-a470-5db1137cb407#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.266 230014 DEBUG oslo_concurrency.lockutils [None req-75711f6a-8e6a-4c6f-9b06-d9eaf423d2c7 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.435 230014 DEBUG nova.compute.manager [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.435 230014 DEBUG oslo_concurrency.lockutils [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.436 230014 DEBUG oslo_concurrency.lockutils [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.436 230014 DEBUG oslo_concurrency.lockutils [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "8e009e75-a97b-4c5d-a470-5db1137cb407-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.436 230014 DEBUG nova.compute.manager [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] No waiting events found dispatching network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:56:11 np0005533252 nova_compute[230010]: 2025-11-24 09:56:11.436 230014 WARNING nova.compute.manager [req-3291dcb6-9cf0-4a72-bad6-99917c79e8a8 req-47a9f8ae-1d4a-4a63-aee9-1f82d9707ea8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Received unexpected event network-vif-plugged-e962e27f-80bf-4103-98ae-d8af84c6fc28 for instance with vm_state deleted and task_state None.#033[00m
Nov 24 04:56:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:12.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:13 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:13.237 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:56:13 np0005533252 podman[236514]: 2025-11-24 09:56:13.312105742 +0000 UTC m=+0.050568786 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 04:56:13 np0005533252 nova_compute[230010]: 2025-11-24 09:56:13.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:14 np0005533252 nova_compute[230010]: 2025-11-24 09:56:14.097 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:14.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:14.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:14 np0005533252 nova_compute[230010]: 2025-11-24 09:56:14.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:14 np0005533252 nova_compute[230010]: 2025-11-24 09:56:14.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:14 np0005533252 nova_compute[230010]: 2025-11-24 09:56:14.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:56:14 np0005533252 nova_compute[230010]: 2025-11-24 09:56:14.791 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:14 np0005533252 nova_compute[230010]: 2025-11-24 09:56:14.873 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:56:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:56:15 np0005533252 nova_compute[230010]: 2025-11-24 09:56:15.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:15 np0005533252 nova_compute[230010]: 2025-11-24 09:56:15.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:15 np0005533252 nova_compute[230010]: 2025-11-24 09:56:15.779 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:15 np0005533252 nova_compute[230010]: 2025-11-24 09:56:15.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:15 np0005533252 nova_compute[230010]: 2025-11-24 09:56:15.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:15 np0005533252 nova_compute[230010]: 2025-11-24 09:56:15.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:15 np0005533252 nova_compute[230010]: 2025-11-24 09:56:15.784 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:56:15 np0005533252 nova_compute[230010]: 2025-11-24 09:56:15.785 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:56:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:56:16 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3645227929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.223 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.379 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.380 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4997MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.380 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.380 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.434 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.435 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.450 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:56:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:56:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:16.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:56:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:56:16 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3862106592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.903 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.908 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:56:16 np0005533252 nova_compute[230010]: 2025-11-24 09:56:16.923 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:56:17 np0005533252 nova_compute[230010]: 2025-11-24 09:56:17.119 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 04:56:17 np0005533252 nova_compute[230010]: 2025-11-24 09:56:17.119 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:19 np0005533252 nova_compute[230010]: 2025-11-24 09:56:19.120 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:19 np0005533252 nova_compute[230010]: 2025-11-24 09:56:19.120 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 04:56:19 np0005533252 nova_compute[230010]: 2025-11-24 09:56:19.121 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 04:56:19 np0005533252 nova_compute[230010]: 2025-11-24 09:56:19.144 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:19 np0005533252 nova_compute[230010]: 2025-11-24 09:56:19.146 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 04:56:19 np0005533252 nova_compute[230010]: 2025-11-24 09:56:19.146 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:19 np0005533252 nova_compute[230010]: 2025-11-24 09:56:19.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:20.057 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:56:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:20.057 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:56:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:56:20.057 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:56:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:20 np0005533252 nova_compute[230010]: 2025-11-24 09:56:20.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:56:20 np0005533252 nova_compute[230010]: 2025-11-24 09:56:20.781 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:22.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:24 np0005533252 nova_compute[230010]: 2025-11-24 09:56:24.076 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978169.0753462, 8e009e75-a97b-4c5d-a470-5db1137cb407 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:56:24 np0005533252 nova_compute[230010]: 2025-11-24 09:56:24.076 230014 INFO nova.compute.manager [-] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] VM Stopped (Lifecycle Event)#033[00m
Nov 24 04:56:24 np0005533252 nova_compute[230010]: 2025-11-24 09:56:24.093 230014 DEBUG nova.compute.manager [None req-c0548ac8-1ffd-4479-9a39-20974888a7d0 - - - - - -] [instance: 8e009e75-a97b-4c5d-a470-5db1137cb407] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:56:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:24 np0005533252 nova_compute[230010]: 2025-11-24 09:56:24.170 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:24.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:25 np0005533252 nova_compute[230010]: 2025-11-24 09:56:25.783 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:56:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:26.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:56:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:28.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:28.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:29 np0005533252 nova_compute[230010]: 2025-11-24 09:56:29.217 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:30.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:56:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:56:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:30 np0005533252 nova_compute[230010]: 2025-11-24 09:56:30.785 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:32 np0005533252 podman[236616]: 2025-11-24 09:56:32.3132287 +0000 UTC m=+0.053958340 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 04:56:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:32.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 04:56:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8425 writes, 31K keys, 8425 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8425 writes, 2022 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1516 writes, 4428 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s#012Interval WAL: 1516 writes, 667 syncs, 2.27 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 04:56:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:34 np0005533252 nova_compute[230010]: 2025-11-24 09:56:34.219 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:34.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:35 np0005533252 nova_compute[230010]: 2025-11-24 09:56:35.786 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:36.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:37 np0005533252 podman[236640]: 2025-11-24 09:56:37.340364328 +0000 UTC m=+0.078992191 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 04:56:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:38.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:39 np0005533252 nova_compute[230010]: 2025-11-24 09:56:39.221 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:40.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:40 np0005533252 nova_compute[230010]: 2025-11-24 09:56:40.788 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:44.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:44 np0005533252 nova_compute[230010]: 2025-11-24 09:56:44.261 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:44 np0005533252 podman[236670]: 2025-11-24 09:56:44.351709565 +0000 UTC m=+0.055892047 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 24 04:56:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:44.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:56:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:56:45 np0005533252 nova_compute[230010]: 2025-11-24 09:56:45.790 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:46.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:46.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:56:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:48.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:56:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:56:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:48.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:56:49 np0005533252 nova_compute[230010]: 2025-11-24 09:56:49.264 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:49 np0005533252 ovn_controller[132966]: 2025-11-24T09:56:49Z|00055|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 24 04:56:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:50.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:50.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:50 np0005533252 nova_compute[230010]: 2025-11-24 09:56:50.791 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:52.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:52.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:54.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:54 np0005533252 nova_compute[230010]: 2025-11-24 09:56:54.266 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:56:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:54.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:55 np0005533252 nova_compute[230010]: 2025-11-24 09:56:55.794 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:56:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:56.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.478530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216478606, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2358, "num_deletes": 251, "total_data_size": 6142089, "memory_usage": 6244928, "flush_reason": "Manual Compaction"}
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216498679, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3984471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26069, "largest_seqno": 28422, "table_properties": {"data_size": 3975086, "index_size": 5879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19668, "raw_average_key_size": 20, "raw_value_size": 3956079, "raw_average_value_size": 4078, "num_data_blocks": 259, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978007, "oldest_key_time": 1763978007, "file_creation_time": 1763978216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 20184 microseconds, and 7589 cpu microseconds.
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.498722) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3984471 bytes OK
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.498747) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.500462) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.500478) EVENT_LOG_v1 {"time_micros": 1763978216500474, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.500496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6131644, prev total WAL file size 6131644, number of live WAL files 2.
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.501878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3891KB)], [51(11MB)]
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216501945, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16531749, "oldest_snapshot_seqno": -1}
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5851 keys, 14385643 bytes, temperature: kUnknown
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216576029, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14385643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14345801, "index_size": 24116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148831, "raw_average_key_size": 25, "raw_value_size": 14239365, "raw_average_value_size": 2433, "num_data_blocks": 982, "num_entries": 5851, "num_filter_entries": 5851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.576246) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14385643 bytes
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.577717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.9 rd, 194.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6371, records dropped: 520 output_compression: NoCompression
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.577734) EVENT_LOG_v1 {"time_micros": 1763978216577725, "job": 30, "event": "compaction_finished", "compaction_time_micros": 74159, "compaction_time_cpu_micros": 26558, "output_level": 6, "num_output_files": 1, "total_output_size": 14385643, "num_input_records": 6371, "num_output_records": 5851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216578633, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216580822, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.501761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:56:56 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:56:56.580891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:56:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:56.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:58.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:56:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:56:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:58.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:56:59 np0005533252 nova_compute[230010]: 2025-11-24 09:56:59.268 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:56:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:00.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:57:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:57:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:00.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:00 np0005533252 nova_compute[230010]: 2025-11-24 09:57:00.831 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:02.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:02 np0005533252 nova_compute[230010]: 2025-11-24 09:57:02.503 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:02 np0005533252 nova_compute[230010]: 2025-11-24 09:57:02.503 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:02 np0005533252 nova_compute[230010]: 2025-11-24 09:57:02.520 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 24 04:57:02 np0005533252 nova_compute[230010]: 2025-11-24 09:57:02.583 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:02 np0005533252 nova_compute[230010]: 2025-11-24 09:57:02.584 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:02 np0005533252 nova_compute[230010]: 2025-11-24 09:57:02.589 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 24 04:57:02 np0005533252 nova_compute[230010]: 2025-11-24 09:57:02.589 230014 INFO nova.compute.claims [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 24 04:57:02 np0005533252 nova_compute[230010]: 2025-11-24 09:57:02.682 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:02.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 04:57:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 5342 writes, 28K keys, 5342 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
Cumulative WAL: 5342 writes, 5342 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1501 writes, 7297 keys, 1501 commit groups, 1.0 writes per commit group, ingest: 16.91 MB, 0.03 MB/s
Interval WAL: 1501 writes, 1501 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    139.7      0.31              0.10        15    0.021       0      0       0.0       0.0
  L6      1/0   13.72 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    157.2    134.6      1.33              0.43        14    0.095     74K   7374       0.0       0.0
 Sum      1/0   13.72 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    127.2    135.6      1.64              0.54        29    0.057     74K   7374       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9    170.5    173.6      0.44              0.20        10    0.044     30K   2555       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    157.2    134.6      1.33              0.43        14    0.095     74K   7374       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    140.5      0.31              0.10        14    0.022       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.043, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.22 GB write, 0.12 MB/s write, 0.20 GB read, 0.12 MB/s read, 1.6 seconds
Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 304.00 MB usage: 17.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000112 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(933,16.70 MB,5.49298%) FilterBlock(29,220.48 KB,0.0708279%) IndexBlock(29,390.20 KB,0.125348%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
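journald encodes unprintable bytes in forwarded messages as #NNN octal escapes: #012 is a newline (which is how multi-line RocksDB dumps end up on one line) and #033 is ESC (the trailing #033[00m ANSI color resets on the nova_compute lines). A minimal sketch, assuming a plain-text export of this journal, to restore the raw bytes when reviewing such messages:

```python
import re

def unescape_journal(line: str) -> str:
    """Replace journald #NNN octal escapes (e.g. #012, #033) with the raw byte."""
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), line)

print(unescape_journal("** DB Stats **#012Uptime(secs): 1800.0 total"))
```

Applied line by line, this turns the escaped stats dumps back into their original multi-line form and restores the ANSI resets to real escape sequences.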
Nov 24 04:57:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:57:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2841874782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.135 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.142 230014 DEBUG nova.compute.provider_tree [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.154 230014 DEBUG nova.scheduler.client.report [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.177 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.177 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.225 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.225 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.254 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.268 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 24 04:57:03 np0005533252 podman[236745]: 2025-11-24 09:57:03.334601688 +0000 UTC m=+0.078723976 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.369 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.371 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.371 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Creating image(s)#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.392 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.413 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.436 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.440 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.493 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.494 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.494 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.495 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.513 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.516 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 9558b085-fcfb-4cae-87bc-2840f81734fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.738 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 9558b085-fcfb-4cae-87bc-2840f81734fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:03 np0005533252 nova_compute[230010]: 2025-11-24 09:57:03.811 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
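The image-backend steps logged above (qemu-img info on the cached base image, rbd import into the vms pool, then a resize to the flavor's 1 GiB disk) can be reconstructed as argv lists. Paths, pool, and ids are taken from the log; the assembly itself is only illustrative, and the real qemu-img call is additionally wrapped in oslo_concurrency.prlimit, which is omitted here:

```python
# Values copied from the nova_compute log lines above.
base = "/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40"
disk = "9558b085-fcfb-4cae-87bc-2840f81734fc_disk"

# Inspect the cached base image (simplified: no prlimit wrapper, no env).
info_cmd = ["qemu-img", "info", base, "--force-share", "--output=json"]

# Import the base image into the Ceph 'vms' pool as the instance disk.
import_cmd = ["rbd", "import", "--pool", "vms", base, disk,
              "--image-format=2", "--id", "openstack",
              "--conf", "/etc/ceph/ceph.conf"]

print(" ".join(info_cmd))
print(" ".join(import_cmd))
```

The subsequent resize to 1073741824 bytes is done through nova's rbd_utils bindings rather than the CLI, per the `resizing rbd image` debug line.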
Nov 24 04:57:04 np0005533252 nova_compute[230010]: 2025-11-24 09:57:04.059 230014 DEBUG nova.objects.instance [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 9558b085-fcfb-4cae-87bc-2840f81734fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:57:04 np0005533252 nova_compute[230010]: 2025-11-24 09:57:04.074 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 24 04:57:04 np0005533252 nova_compute[230010]: 2025-11-24 09:57:04.074 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Ensure instance console log exists: /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 24 04:57:04 np0005533252 nova_compute[230010]: 2025-11-24 09:57:04.075 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:04 np0005533252 nova_compute[230010]: 2025-11-24 09:57:04.075 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:04 np0005533252 nova_compute[230010]: 2025-11-24 09:57:04.075 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:04.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
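The radosgw beast lines above follow an Apache-style access format (client, user, timestamp, request, status, bytes, latency). A small sketch for pulling out the fields when tallying the recurring HEAD health probes; the regex is an assumption fitted to the lines shown here, not radosgw's documented format:

```python
import re

# Fitted to the beast access lines in this log (an assumption, not a spec).
BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fa9789055d0: 192.168.122.100 - anonymous '
        '[24/Nov/2025:09:57:04.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group("client"), m.group("request"), m.group("status"))
```

Grouping by client would show the 192.168.122.100 / 192.168.122.102 probes alternating on a roughly two-second cadence, as in the surrounding entries.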
Nov 24 04:57:04 np0005533252 nova_compute[230010]: 2025-11-24 09:57:04.322 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:04 np0005533252 nova_compute[230010]: 2025-11-24 09:57:04.334 230014 DEBUG nova.policy [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 24 04:57:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:04.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:05 np0005533252 nova_compute[230010]: 2025-11-24 09:57:05.834 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:06.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:06.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:07 np0005533252 nova_compute[230010]: 2025-11-24 09:57:07.359 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Successfully created port: f43553d8-3872-4217-8259-57949e64eab2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 24 04:57:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:08 np0005533252 podman[236983]: 2025-11-24 09:57:08.220441383 +0000 UTC m=+0.080766305 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 04:57:08 np0005533252 nova_compute[230010]: 2025-11-24 09:57:08.405 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Successfully updated port: f43553d8-3872-4217-8259-57949e64eab2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 24 04:57:08 np0005533252 nova_compute[230010]: 2025-11-24 09:57:08.419 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:57:08 np0005533252 nova_compute[230010]: 2025-11-24 09:57:08.419 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:57:08 np0005533252 nova_compute[230010]: 2025-11-24 09:57:08.419 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 24 04:57:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:57:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:57:08 np0005533252 nova_compute[230010]: 2025-11-24 09:57:08.500 230014 DEBUG nova.compute.manager [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-changed-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:57:08 np0005533252 nova_compute[230010]: 2025-11-24 09:57:08.500 230014 DEBUG nova.compute.manager [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Refreshing instance network info cache due to event network-changed-f43553d8-3872-4217-8259-57949e64eab2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:57:08 np0005533252 nova_compute[230010]: 2025-11-24 09:57:08.500 230014 DEBUG oslo_concurrency.lockutils [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:57:08 np0005533252 nova_compute[230010]: 2025-11-24 09:57:08.573 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 24 04:57:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:08.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:09 np0005533252 podman[237155]: 2025-11-24 09:57:09.280101389 +0000 UTC m=+0.048635010 container create 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Nov 24 04:57:09 np0005533252 systemd[1]: Started libpod-conmon-7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2.scope.
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.325 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:09 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:57:09 np0005533252 podman[237155]: 2025-11-24 09:57:09.263226227 +0000 UTC m=+0.031759868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:57:09 np0005533252 podman[237155]: 2025-11-24 09:57:09.36565102 +0000 UTC m=+0.134184651 container init 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 04:57:09 np0005533252 podman[237155]: 2025-11-24 09:57:09.372284423 +0000 UTC m=+0.140818044 container start 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 04:57:09 np0005533252 podman[237155]: 2025-11-24 09:57:09.375864301 +0000 UTC m=+0.144397922 container attach 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 04:57:09 np0005533252 heuristic_turing[237171]: 167 167
Nov 24 04:57:09 np0005533252 systemd[1]: libpod-7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2.scope: Deactivated successfully.
Nov 24 04:57:09 np0005533252 podman[237155]: 2025-11-24 09:57:09.379287384 +0000 UTC m=+0.147821005 container died 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 04:57:09 np0005533252 systemd[1]: var-lib-containers-storage-overlay-99d761b0b0e6dc4f0334fcf7277b12378a45749a2f8f0117d1e4942328930411-merged.mount: Deactivated successfully.
Nov 24 04:57:09 np0005533252 podman[237155]: 2025-11-24 09:57:09.420199044 +0000 UTC m=+0.188732665 container remove 7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 04:57:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:09 np0005533252 systemd[1]: libpod-conmon-7619d2205037a3da0fe5165b70f086708197e9e8eeadef3db8665c3793b961b2.scope: Deactivated successfully.
Nov 24 04:57:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:09 np0005533252 podman[237194]: 2025-11-24 09:57:09.587052523 +0000 UTC m=+0.039714491 container create 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:57:09 np0005533252 systemd[1]: Started libpod-conmon-1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce.scope.
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.629 230014 DEBUG nova.network.neutron [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:57:09 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:57:09 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 04:57:09 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 04:57:09 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 04:57:09 np0005533252 podman[237194]: 2025-11-24 09:57:09.570745185 +0000 UTC m=+0.023407173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 04:57:09 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 04:57:09 np0005533252 podman[237194]: 2025-11-24 09:57:09.678737945 +0000 UTC m=+0.131399913 container init 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 04:57:09 np0005533252 podman[237194]: 2025-11-24 09:57:09.684818443 +0000 UTC m=+0.137480411 container start 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 04:57:09 np0005533252 podman[237194]: 2025-11-24 09:57:09.688140945 +0000 UTC m=+0.140802913 container attach 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.713 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.713 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance network_info: |[{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.714 230014 DEBUG oslo_concurrency.lockutils [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.714 230014 DEBUG nova.network.neutron [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Refreshing network info cache for port f43553d8-3872-4217-8259-57949e64eab2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.716 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start _get_guest_xml network_info=[{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.721 230014 WARNING nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.728 230014 DEBUG nova.virt.libvirt.host [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.730 230014 DEBUG nova.virt.libvirt.host [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.733 230014 DEBUG nova.virt.libvirt.host [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.733 230014 DEBUG nova.virt.libvirt.host [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.734 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.734 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.734 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.735 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.735 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.735 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.736 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.736 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.736 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.736 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.737 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.737 230014 DEBUG nova.virt.hardware [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 24 04:57:09 np0005533252 nova_compute[230010]: 2025-11-24 09:57:09.739 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/146065392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.187 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.216 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.220 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]: [
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:    {
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "available": false,
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "being_replaced": false,
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "ceph_device_lvm": false,
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "lsm_data": {},
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "lvs": [],
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "path": "/dev/sr0",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "rejected_reasons": [
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "Has a FileSystem",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "Insufficient space (<5GB)"
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        ],
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        "sys_api": {
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "actuators": null,
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "device_nodes": [
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:                "sr0"
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            ],
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "devname": "sr0",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "human_readable_size": "482.00 KB",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "id_bus": "ata",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "model": "QEMU DVD-ROM",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "nr_requests": "2",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "parent": "/dev/sr0",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "partitions": {},
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "path": "/dev/sr0",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "removable": "1",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "rev": "2.5+",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "ro": "0",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "rotational": "1",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "sas_address": "",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "sas_device_handle": "",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "scheduler_mode": "mq-deadline",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "sectors": 0,
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "sectorsize": "2048",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "size": 493568.0,
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "support_discard": "2048",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "type": "disk",
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:            "vendor": "QEMU"
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:        }
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]:    }
Nov 24 04:57:10 np0005533252 infallible_banzai[237210]: ]
Nov 24 04:57:10 np0005533252 systemd[1]: libpod-1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce.scope: Deactivated successfully.
Nov 24 04:57:10 np0005533252 podman[237194]: 2025-11-24 09:57:10.44996437 +0000 UTC m=+0.902626348 container died 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 04:57:10 np0005533252 systemd[1]: var-lib-containers-storage-overlay-1ffdc502dd416ef234a6305b4880d8546625fd84228039774c7b1469f11e0cd3-merged.mount: Deactivated successfully.
Nov 24 04:57:10 np0005533252 podman[237194]: 2025-11-24 09:57:10.499524591 +0000 UTC m=+0.952186559 container remove 1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:57:10 np0005533252 systemd[1]: libpod-conmon-1ee92c2dae0cc9561a8c5f88aa9ba2b13408f8dd25de9405d89910547b06a4ce.scope: Deactivated successfully.
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4164713245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.684 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.687 230014 DEBUG nova.virt.libvirt.vif [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:57:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-826525135',display_name='tempest-TestNetworkBasicOps-server-826525135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-826525135',id=5,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC35slzFXIscG7yI0ldCNK4vlvp0/JkuMYp+G9aKEuW9NB0+nlUoAY9//FD0F8qY2c6aehGz4dqJCwd0w9isq9P1Emwaoz7MA2BbTfYqIAVwl0HpYimM2CBxhvzKgVHsXQ==',key_name='tempest-TestNetworkBasicOps-569358808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-epqclak3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:57:03Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=9558b085-fcfb-4cae-87bc-2840f81734fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.688 230014 DEBUG nova.network.os_vif_util [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.689 230014 DEBUG nova.network.os_vif_util [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.691 230014 DEBUG nova.objects.instance [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 9558b085-fcfb-4cae-87bc-2840f81734fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.707 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] End _get_guest_xml xml=<domain type="kvm">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <uuid>9558b085-fcfb-4cae-87bc-2840f81734fc</uuid>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <name>instance-00000005</name>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <memory>131072</memory>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <vcpu>1</vcpu>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <nova:name>tempest-TestNetworkBasicOps-server-826525135</nova:name>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <nova:creationTime>2025-11-24 09:57:09</nova:creationTime>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <nova:flavor name="m1.nano">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <nova:memory>128</nova:memory>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <nova:disk>1</nova:disk>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <nova:swap>0</nova:swap>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <nova:vcpus>1</nova:vcpus>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      </nova:flavor>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <nova:owner>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      </nova:owner>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <nova:ports>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <nova:port uuid="f43553d8-3872-4217-8259-57949e64eab2">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        </nova:port>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      </nova:ports>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </nova:instance>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <sysinfo type="smbios">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <entry name="manufacturer">RDO</entry>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <entry name="product">OpenStack Compute</entry>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <entry name="serial">9558b085-fcfb-4cae-87bc-2840f81734fc</entry>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <entry name="uuid">9558b085-fcfb-4cae-87bc-2840f81734fc</entry>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <entry name="family">Virtual Machine</entry>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <boot dev="hd"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <smbios mode="sysinfo"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <vmcoreinfo/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <clock offset="utc">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <timer name="pit" tickpolicy="delay"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <timer name="hpet" present="no"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <cpu mode="host-model" match="exact">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <topology sockets="1" cores="1" threads="1"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <disk type="network" device="disk">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/9558b085-fcfb-4cae-87bc-2840f81734fc_disk">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <target dev="vda" bus="virtio"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <disk type="network" device="cdrom">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <target dev="sda" bus="sata"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <interface type="ethernet">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <mac address="fa:16:3e:58:35:61"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <mtu size="1442"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <target dev="tapf43553d8-38"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <serial type="pty">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <log file="/var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/console.log" append="off"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <input type="tablet" bus="usb"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <rng model="virtio">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <backend model="random">/dev/urandom</backend>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <controller type="usb" index="0"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    <memballoon model="virtio">
Nov 24 04:57:10 np0005533252 nova_compute[230010]:      <stats period="10"/>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:57:10 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:57:10 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:57:10 np0005533252 nova_compute[230010]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.709 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Preparing to wait for external event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.709 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.709 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.709 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.710 230014 DEBUG nova.virt.libvirt.vif [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:57:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-826525135',display_name='tempest-TestNetworkBasicOps-server-826525135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-826525135',id=5,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC35slzFXIscG7yI0ldCNK4vlvp0/JkuMYp+G9aKEuW9NB0+nlUoAY9//FD0F8qY2c6aehGz4dqJCwd0w9isq9P1Emwaoz7MA2BbTfYqIAVwl0HpYimM2CBxhvzKgVHsXQ==',key_name='tempest-TestNetworkBasicOps-569358808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-epqclak3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:57:03Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=9558b085-fcfb-4cae-87bc-2840f81734fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.710 230014 DEBUG nova.network.os_vif_util [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.711 230014 DEBUG nova.network.os_vif_util [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.711 230014 DEBUG os_vif [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.712 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.712 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.712 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.715 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.716 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf43553d8-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.716 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf43553d8-38, col_values=(('external_ids', {'iface-id': 'f43553d8-3872-4217-8259-57949e64eab2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:35:61', 'vm-uuid': '9558b085-fcfb-4cae-87bc-2840f81734fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:10 np0005533252 NetworkManager[48870]: <info>  [1763978230.7631] manager: (tapf43553d8-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 24 04:57:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:57:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:10.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.762 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.767 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.769 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.771 230014 INFO os_vif [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38')#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.815 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.816 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.816 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:58:35:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.816 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Using config drive#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.838 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:10 np0005533252 nova_compute[230010]: 2025-11-24 09:57:10.843 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:57:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.053 230014 DEBUG nova.network.neutron [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updated VIF entry in instance network info cache for port f43553d8-3872-4217-8259-57949e64eab2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.054 230014 DEBUG nova.network.neutron [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.071 230014 DEBUG oslo_concurrency.lockutils [req-5d756cf4-d91c-4882-a171-c87963f7da21 req-060e4b75-ef8e-433c-8600-55b18b6744c9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.428 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Creating config drive at /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.433 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvtbhz4e3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:11 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:57:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:11 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:11 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.558 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvtbhz4e3" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.588 230014 DEBUG nova.storage.rbd_utils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.592 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.802 230014 DEBUG oslo_concurrency.processutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config 9558b085-fcfb-4cae-87bc-2840f81734fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.802 230014 INFO nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Deleting local config drive /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc/disk.config because it was imported into RBD.#033[00m
Nov 24 04:57:11 np0005533252 kernel: tapf43553d8-38: entered promiscuous mode
Nov 24 04:57:11 np0005533252 NetworkManager[48870]: <info>  [1763978231.8624] manager: (tapf43553d8-38): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 24 04:57:11 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:11Z|00056|binding|INFO|Claiming lport f43553d8-3872-4217-8259-57949e64eab2 for this chassis.
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.898 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:11 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:11Z|00057|binding|INFO|f43553d8-3872-4217-8259-57949e64eab2: Claiming fa:16:3e:58:35:61 10.100.0.9
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.903 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:11 np0005533252 systemd-machined[193537]: New machine qemu-3-instance-00000005.
Nov 24 04:57:11 np0005533252 systemd-udevd[238591]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.935 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:35:61 10.100.0.9'], port_security=['fa:16:3e:58:35:61 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9558b085-fcfb-4cae-87bc-2840f81734fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a54e00b-2ddf-4829-be22-9a556b586781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9b8d67b-4e9e-4fdc-b23f-05b645f04725', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cefc33a4-ddb4-430f-bd3b-965ffc7d2eca, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=f43553d8-3872-4217-8259-57949e64eab2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.937 142336 INFO neutron.agent.ovn.metadata.agent [-] Port f43553d8-3872-4217-8259-57949e64eab2 in datapath 4a54e00b-2ddf-4829-be22-9a556b586781 bound to our chassis#033[00m
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.938 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a54e00b-2ddf-4829-be22-9a556b586781#033[00m
Nov 24 04:57:11 np0005533252 NetworkManager[48870]: <info>  [1763978231.9430] device (tapf43553d8-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 04:57:11 np0005533252 NetworkManager[48870]: <info>  [1763978231.9440] device (tapf43553d8-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.953 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d35878-d497-4db5-9c72-c7568714ad30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.955 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a54e00b-21 in ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.957 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a54e00b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.957 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[52792cf5-48e5-4ccd-ae7b-923845e41e1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.958 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b1becf-5c4a-42d7-a630-e5a917a782a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:11 np0005533252 systemd[1]: Started Virtual Machine qemu-3-instance-00000005.
Nov 24 04:57:11 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:11Z|00058|binding|INFO|Setting lport f43553d8-3872-4217-8259-57949e64eab2 ovn-installed in OVS
Nov 24 04:57:11 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:11Z|00059|binding|INFO|Setting lport f43553d8-3872-4217-8259-57949e64eab2 up in Southbound
Nov 24 04:57:11 np0005533252 nova_compute[230010]: 2025-11-24 09:57:11.968 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.977 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[74b9a655-98f5-4ce0-ad13-248515677316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:11 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:11.990 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[93c3e9b4-0cd7-489f-a9c3-7264110369a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.018 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1e2f94-56c6-4e56-862b-4e1fbe1c456b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.025 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e058b5b3-b2c3-4379-aa91-a92462506f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 NetworkManager[48870]: <info>  [1763978232.0263] manager: (tap4a54e00b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.061 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[62c48810-ef2e-481a-8e32-ee9cd1df2add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.063 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bf1a49-49f0-42ff-a8bf-529e1312b7c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 NetworkManager[48870]: <info>  [1763978232.0847] device (tap4a54e00b-20): carrier: link connected
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.088 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[12632800-d427-4430-acf9-14e283cc9249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.104 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[863039b8-c035-4670-9fd8-4be93b7de05a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a54e00b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:bd:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417563, 'reachable_time': 43698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238625, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.121 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[8efab08a-5fdc-4142-a666-752a0c05e4fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:bdd5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 417563, 'tstamp': 417563}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238627, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.137 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3f7fa4-c229-4a68-8dab-95d16e035f45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a54e00b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:bd:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417563, 'reachable_time': 43698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238628, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.170 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4a993d-e010-4300-9277-c993e1f016a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.232 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[67d970c5-2408-484d-8c80-1685df302d4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.234 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a54e00b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.235 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.235 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a54e00b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:12 np0005533252 kernel: tap4a54e00b-20: entered promiscuous mode
Nov 24 04:57:12 np0005533252 NetworkManager[48870]: <info>  [1763978232.2377] manager: (tap4a54e00b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.239 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.241 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a54e00b-20, col_values=(('external_ids', {'iface-id': '825c51a9-1ab7-4d33-9d7f-c9278b05a734'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:12 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:12Z|00060|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.244 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.245 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[5efa4fcd-14a9-411d-925f-7bf919cc030c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.246 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: global
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    log         /dev/log local0 debug
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    log-tag     haproxy-metadata-proxy-4a54e00b-2ddf-4829-be22-9a556b586781
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    user        root
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    group       root
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    maxconn     1024
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    pidfile     /var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    daemon
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: defaults
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    log global
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    mode http
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    option httplog
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    option dontlognull
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    option http-server-close
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    option forwardfor
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    retries                 3
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    timeout http-request    30s
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    timeout connect         30s
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    timeout client          32s
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    timeout server          32s
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    timeout http-keep-alive 30s
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: listen listener
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    bind 169.254.169.254:80
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    server metadata /var/lib/neutron/metadata_proxy
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]:    http-request add-header X-OVN-Network-ID 4a54e00b-2ddf-4829-be22-9a556b586781
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 24 04:57:12 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:12.248 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'env', 'PROCESS_TAG=haproxy-4a54e00b-2ddf-4829-be22-9a556b586781', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a54e00b-2ddf-4829-be22-9a556b586781.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.256 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.594 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978232.593641, 9558b085-fcfb-4cae-87bc-2840f81734fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.594 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] VM Started (Lifecycle Event)#033[00m
Nov 24 04:57:12 np0005533252 podman[238700]: 2025-11-24 09:57:12.613623816 +0000 UTC m=+0.047386780 container create 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 04:57:12 np0005533252 systemd[1]: Started libpod-conmon-55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8.scope.
Nov 24 04:57:12 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:57:12 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb21f40013aa0967f3b24322d4423a75e6cb1995a2ccd777bff1c3622c2fa39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 04:57:12 np0005533252 podman[238700]: 2025-11-24 09:57:12.59048204 +0000 UTC m=+0.024245024 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:57:12 np0005533252 podman[238700]: 2025-11-24 09:57:12.707238095 +0000 UTC m=+0.141001089 container init 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 04:57:12 np0005533252 podman[238700]: 2025-11-24 09:57:12.713648301 +0000 UTC m=+0.147411265 container start 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 04:57:12 np0005533252 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [NOTICE]   (238719) : New worker (238721) forked
Nov 24 04:57:12 np0005533252 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [NOTICE]   (238719) : Loading success.
Nov 24 04:57:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.879 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.882 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978232.5959396, 9558b085-fcfb-4cae-87bc-2840f81734fc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.882 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] VM Paused (Lifecycle Event)#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.899 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.903 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.911 230014 DEBUG nova.compute.manager [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.911 230014 DEBUG oslo_concurrency.lockutils [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.912 230014 DEBUG oslo_concurrency.lockutils [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.912 230014 DEBUG oslo_concurrency.lockutils [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.912 230014 DEBUG nova.compute.manager [req-cb40eb7e-94b4-4586-be58-b8bc6878f6ce req-5d324aab-4309-403b-8fc0-166684ef7d39 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Processing event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.913 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.917 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.920 230014 INFO nova.virt.libvirt.driver [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance spawned successfully.#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.920 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.926 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.927 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978232.9161274, 9558b085-fcfb-4cae-87bc-2840f81734fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.927 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] VM Resumed (Lifecycle Event)#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.941 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.941 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.942 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.942 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.943 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.943 230014 DEBUG nova.virt.libvirt.driver [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.950 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.953 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 04:57:12 np0005533252 nova_compute[230010]: 2025-11-24 09:57:12.982 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 04:57:13 np0005533252 nova_compute[230010]: 2025-11-24 09:57:13.078 230014 INFO nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Took 9.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 24 04:57:13 np0005533252 nova_compute[230010]: 2025-11-24 09:57:13.078 230014 DEBUG nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:57:13 np0005533252 nova_compute[230010]: 2025-11-24 09:57:13.191 230014 INFO nova.compute.manager [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Took 10.63 seconds to build instance.#033[00m
Nov 24 04:57:13 np0005533252 nova_compute[230010]: 2025-11-24 09:57:13.207 230014 DEBUG oslo_concurrency.lockutils [None req-b6e43b8b-33fa-493d-9a63-307e139453a0 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:13 np0005533252 nova_compute[230010]: 2025-11-24 09:57:13.774 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:14 np0005533252 nova_compute[230010]: 2025-11-24 09:57:14.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:14.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.026 230014 DEBUG nova.compute.manager [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.026 230014 DEBUG oslo_concurrency.lockutils [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.026 230014 DEBUG oslo_concurrency.lockutils [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.027 230014 DEBUG oslo_concurrency.lockutils [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.027 230014 DEBUG nova.compute.manager [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] No waiting events found dispatching network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.027 230014 WARNING nova.compute.manager [req-5fcc3e46-a99d-4916-af34-b810bf7f28a8 req-dfd22fe3-b300-4f62-8276-18acb303d475 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received unexpected event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 for instance with vm_state active and task_state None.#033[00m
Nov 24 04:57:15 np0005533252 podman[238731]: 2025-11-24 09:57:15.340323356 +0000 UTC m=+0.077010594 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 04:57:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:57:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:57:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:57:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.814 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:15 np0005533252 nova_compute[230010]: 2025-11-24 09:57:15.837 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:57:16 np0005533252 nova_compute[230010]: 2025-11-24 09:57:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.496 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:17 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:17Z|00061|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 04:57:17 np0005533252 NetworkManager[48870]: <info>  [1763978237.4997] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 24 04:57:17 np0005533252 NetworkManager[48870]: <info>  [1763978237.5007] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.527 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:17 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:17Z|00062|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.531 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:17 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:17.694 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:57:17 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:17.695 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.696 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.790 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.790 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.791 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.791 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.846 230014 DEBUG nova.compute.manager [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-changed-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.847 230014 DEBUG nova.compute.manager [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Refreshing instance network info cache due to event network-changed-f43553d8-3872-4217-8259-57949e64eab2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.847 230014 DEBUG oslo_concurrency.lockutils [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.847 230014 DEBUG oslo_concurrency.lockutils [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:57:17 np0005533252 nova_compute[230010]: 2025-11-24 09:57:17.848 230014 DEBUG nova.network.neutron [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Refreshing network info cache for port f43553d8-3872-4217-8259-57949e64eab2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:57:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:57:18 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1778182928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.231 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.301 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.303 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.496 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.497 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4792MB free_disk=59.92180252075195GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.498 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.499 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.689 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 9558b085-fcfb-4cae-87bc-2840f81734fc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.690 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:57:18 np0005533252 nova_compute[230010]: 2025-11-24 09:57:18.691 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:57:18 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:18.697 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:18.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.179 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:57:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/65500581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.664 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.672 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.695 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.719 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.720 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.782 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.783 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 04:57:19 np0005533252 nova_compute[230010]: 2025-11-24 09:57:19.783 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 04:57:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:20.058 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:20.059 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:20.059 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:20.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:20 np0005533252 nova_compute[230010]: 2025-11-24 09:57:20.300 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:57:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:20.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:20 np0005533252 nova_compute[230010]: 2025-11-24 09:57:20.817 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:20 np0005533252 nova_compute[230010]: 2025-11-24 09:57:20.840 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:21 np0005533252 nova_compute[230010]: 2025-11-24 09:57:21.254 230014 DEBUG nova.network.neutron [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updated VIF entry in instance network info cache for port f43553d8-3872-4217-8259-57949e64eab2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:57:21 np0005533252 nova_compute[230010]: 2025-11-24 09:57:21.255 230014 DEBUG nova.network.neutron [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:57:21 np0005533252 nova_compute[230010]: 2025-11-24 09:57:21.295 230014 DEBUG oslo_concurrency.lockutils [req-72a3d723-7f40-4da5-bac9-336921243068 req-47187a4d-8e7b-4cf1-8a44-6198243b3467 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:57:21 np0005533252 nova_compute[230010]: 2025-11-24 09:57:21.296 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquired lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:57:21 np0005533252 nova_compute[230010]: 2025-11-24 09:57:21.296 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 24 04:57:21 np0005533252 nova_compute[230010]: 2025-11-24 09:57:21.297 230014 DEBUG nova.objects.instance [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9558b085-fcfb-4cae-87bc-2840f81734fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:57:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:22.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.451 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [{"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.463 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Releasing lock "refresh_cache-9558b085-fcfb-4cae-87bc-2840f81734fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.463 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.464 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.464 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.464 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.465 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.475 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.476 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:23 np0005533252 nova_compute[230010]: 2025-11-24 09:57:23.476 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 24 04:57:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:25 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:25Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:35:61 10.100.0.9
Nov 24 04:57:25 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:25Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:35:61 10.100.0.9
Nov 24 04:57:25 np0005533252 nova_compute[230010]: 2025-11-24 09:57:25.842 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:57:25 np0005533252 nova_compute[230010]: 2025-11-24 09:57:25.845 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:57:25 np0005533252 nova_compute[230010]: 2025-11-24 09:57:25.845 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 04:57:25 np0005533252 nova_compute[230010]: 2025-11-24 09:57:25.845 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 04:57:25 np0005533252 nova_compute[230010]: 2025-11-24 09:57:25.867 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:25 np0005533252 nova_compute[230010]: 2025-11-24 09:57:25.868 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 04:57:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:26.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:26 np0005533252 nova_compute[230010]: 2025-11-24 09:57:26.478 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:57:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:26.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:28.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:28.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:30.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:57:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:57:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:30.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:30 np0005533252 nova_compute[230010]: 2025-11-24 09:57:30.868 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:57:30 np0005533252 nova_compute[230010]: 2025-11-24 09:57:30.870 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:30 np0005533252 nova_compute[230010]: 2025-11-24 09:57:30.870 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 04:57:30 np0005533252 nova_compute[230010]: 2025-11-24 09:57:30.870 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 04:57:30 np0005533252 nova_compute[230010]: 2025-11-24 09:57:30.870 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 04:57:30 np0005533252 nova_compute[230010]: 2025-11-24 09:57:30.872 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:32.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.310 230014 INFO nova.compute.manager [None req-267457df-5643-4c7d-a289-82a0bd31e250 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Get console output#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.316 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.562 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.562 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.562 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.563 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.563 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.564 230014 INFO nova.compute.manager [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Terminating instance#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.565 230014 DEBUG nova.compute.manager [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 24 04:57:32 np0005533252 kernel: tapf43553d8-38 (unregistering): left promiscuous mode
Nov 24 04:57:32 np0005533252 NetworkManager[48870]: <info>  [1763978252.6208] device (tapf43553d8-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 04:57:32 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:32Z|00063|binding|INFO|Releasing lport f43553d8-3872-4217-8259-57949e64eab2 from this chassis (sb_readonly=0)
Nov 24 04:57:32 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:32Z|00064|binding|INFO|Setting lport f43553d8-3872-4217-8259-57949e64eab2 down in Southbound
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.629 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 ovn_controller[132966]: 2025-11-24T09:57:32Z|00065|binding|INFO|Removing iface tapf43553d8-38 ovn-installed in OVS
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.632 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.637 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:35:61 10.100.0.9'], port_security=['fa:16:3e:58:35:61 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9558b085-fcfb-4cae-87bc-2840f81734fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a54e00b-2ddf-4829-be22-9a556b586781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9b8d67b-4e9e-4fdc-b23f-05b645f04725', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cefc33a4-ddb4-430f-bd3b-965ffc7d2eca, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=f43553d8-3872-4217-8259-57949e64eab2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.638 142336 INFO neutron.agent.ovn.metadata.agent [-] Port f43553d8-3872-4217-8259-57949e64eab2 in datapath 4a54e00b-2ddf-4829-be22-9a556b586781 unbound from our chassis#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.639 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a54e00b-2ddf-4829-be22-9a556b586781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.641 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[35fc0bcf-b5d4-43ce-b91a-5d6c0999a640]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.641 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 namespace which is not needed anymore#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.659 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 24 04:57:32 np0005533252 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Consumed 13.852s CPU time.
Nov 24 04:57:32 np0005533252 systemd-machined[193537]: Machine qemu-3-instance-00000005 terminated.
Nov 24 04:57:32 np0005533252 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [NOTICE]   (238719) : haproxy version is 2.8.14-c23fe91
Nov 24 04:57:32 np0005533252 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [NOTICE]   (238719) : path to executable is /usr/sbin/haproxy
Nov 24 04:57:32 np0005533252 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [WARNING]  (238719) : Exiting Master process...
Nov 24 04:57:32 np0005533252 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [ALERT]    (238719) : Current worker (238721) exited with code 143 (Terminated)
Nov 24 04:57:32 np0005533252 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[238715]: [WARNING]  (238719) : All workers exited. Exiting... (0)
Nov 24 04:57:32 np0005533252 systemd[1]: libpod-55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8.scope: Deactivated successfully.
Nov 24 04:57:32 np0005533252 podman[238878]: 2025-11-24 09:57:32.786137162 +0000 UTC m=+0.047402869 container died 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 04:57:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:32.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.800 230014 INFO nova.virt.libvirt.driver [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Instance destroyed successfully.#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.802 230014 DEBUG nova.objects.instance [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 9558b085-fcfb-4cae-87bc-2840f81734fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.814 230014 DEBUG nova.virt.libvirt.vif [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-826525135',display_name='tempest-TestNetworkBasicOps-server-826525135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-826525135',id=5,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC35slzFXIscG7yI0ldCNK4vlvp0/JkuMYp+G9aKEuW9NB0+nlUoAY9//FD0F8qY2c6aehGz4dqJCwd0w9isq9P1Emwaoz7MA2BbTfYqIAVwl0HpYimM2CBxhvzKgVHsXQ==',key_name='tempest-TestNetworkBasicOps-569358808',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:57:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-epqclak3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:57:13Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=9558b085-fcfb-4cae-87bc-2840f81734fc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.814 230014 DEBUG nova.network.os_vif_util [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "f43553d8-3872-4217-8259-57949e64eab2", "address": "fa:16:3e:58:35:61", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf43553d8-38", "ovs_interfaceid": "f43553d8-3872-4217-8259-57949e64eab2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.815 230014 DEBUG nova.network.os_vif_util [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.815 230014 DEBUG os_vif [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.819 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.819 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf43553d8-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.820 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.822 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8-userdata-shm.mount: Deactivated successfully.
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.825 230014 INFO os_vif [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:35:61,bridge_name='br-int',has_traffic_filtering=True,id=f43553d8-3872-4217-8259-57949e64eab2,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf43553d8-38')#033[00m
Nov 24 04:57:32 np0005533252 systemd[1]: var-lib-containers-storage-overlay-cdb21f40013aa0967f3b24322d4423a75e6cb1995a2ccd777bff1c3622c2fa39-merged.mount: Deactivated successfully.
Nov 24 04:57:32 np0005533252 podman[238878]: 2025-11-24 09:57:32.841158078 +0000 UTC m=+0.102423765 container cleanup 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:57:32 np0005533252 systemd[1]: libpod-conmon-55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8.scope: Deactivated successfully.
Nov 24 04:57:32 np0005533252 podman[238933]: 2025-11-24 09:57:32.905350788 +0000 UTC m=+0.041102517 container remove 55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.911 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[454b0868-3e98-4eb3-813d-a36b96988214]: (4, ('Mon Nov 24 09:57:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 (55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8)\n55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8\nMon Nov 24 09:57:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 (55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8)\n55916cd0a69dc07af9d16de5c9afdb86b7d0fe881080057550e3c49be4fd83d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.913 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2d118c-f3b0-4fb9-a089-1e259df4dfaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.914 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a54e00b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.916 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 kernel: tap4a54e00b-20: left promiscuous mode
Nov 24 04:57:32 np0005533252 nova_compute[230010]: 2025-11-24 09:57:32.928 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.930 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[abd002cd-1e6d-4621-a994-26bcd449a770]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.946 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1048c602-e26a-4233-a62e-63badce1a238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.948 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[65b2ea96-30a3-441c-92e2-f09da9bb2c42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.965 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3a54b2-6987-478b-a58e-de8976c8900d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417556, 'reachable_time': 27452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238949, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.969 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 24 04:57:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:57:32.969 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a8fbca-85de-4927-8975-64c1dd1ca98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:57:32 np0005533252 systemd[1]: run-netns-ovnmeta\x2d4a54e00b\x2d2ddf\x2d4829\x2dbe22\x2d9a556b586781.mount: Deactivated successfully.
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.225 230014 INFO nova.virt.libvirt.driver [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Deleting instance files /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc_del#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.226 230014 INFO nova.virt.libvirt.driver [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Deletion of /var/lib/nova/instances/9558b085-fcfb-4cae-87bc-2840f81734fc_del complete#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.291 230014 INFO nova.compute.manager [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.292 230014 DEBUG oslo.service.loopingcall [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.292 230014 DEBUG nova.compute.manager [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.292 230014 DEBUG nova.network.neutron [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.560 230014 DEBUG nova.compute.manager [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-unplugged-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.561 230014 DEBUG oslo_concurrency.lockutils [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.562 230014 DEBUG oslo_concurrency.lockutils [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.562 230014 DEBUG oslo_concurrency.lockutils [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.563 230014 DEBUG nova.compute.manager [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] No waiting events found dispatching network-vif-unplugged-f43553d8-3872-4217-8259-57949e64eab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:57:33 np0005533252 nova_compute[230010]: 2025-11-24 09:57:33.563 230014 DEBUG nova.compute.manager [req-f0f76990-9ff0-4d31-b21e-0bae51786a40 req-4a7cf2c3-f231-4ef0-bcb5-b466ec780f05 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-unplugged-f43553d8-3872-4217-8259-57949e64eab2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 24 04:57:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:34 np0005533252 podman[238952]: 2025-11-24 09:57:34.346315895 +0000 UTC m=+0.074568984 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 24 04:57:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:34 np0005533252 nova_compute[230010]: 2025-11-24 09:57:34.889 230014 DEBUG nova.network.neutron [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:57:34 np0005533252 nova_compute[230010]: 2025-11-24 09:57:34.911 230014 INFO nova.compute.manager [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Took 1.62 seconds to deallocate network for instance.#033[00m
Nov 24 04:57:34 np0005533252 nova_compute[230010]: 2025-11-24 09:57:34.961 230014 DEBUG nova.compute.manager [req-52808065-8a09-495a-aee6-673c221df7cb req-4db2183a-538a-4da4-9e2b-205aa7f43c38 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-deleted-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:57:34 np0005533252 nova_compute[230010]: 2025-11-24 09:57:34.987 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:34 np0005533252 nova_compute[230010]: 2025-11-24 09:57:34.987 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.318 230014 DEBUG oslo_concurrency.processutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.636 230014 DEBUG nova.compute.manager [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.636 230014 DEBUG oslo_concurrency.lockutils [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.637 230014 DEBUG oslo_concurrency.lockutils [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.637 230014 DEBUG oslo_concurrency.lockutils [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.637 230014 DEBUG nova.compute.manager [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] No waiting events found dispatching network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.637 230014 WARNING nova.compute.manager [req-1527bd92-d70f-41b5-ad20-afbaaaff21d5 req-701e958c-38a3-4309-bdee-f53328c6d99b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Received unexpected event network-vif-plugged-f43553d8-3872-4217-8259-57949e64eab2 for instance with vm_state deleted and task_state None.#033[00m
Nov 24 04:57:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:57:35 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3443165567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.746 230014 DEBUG oslo_concurrency.processutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.751 230014 DEBUG nova.compute.provider_tree [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.767 230014 DEBUG nova.scheduler.client.report [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.791 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.822 230014 INFO nova.scheduler.client.report [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 9558b085-fcfb-4cae-87bc-2840f81734fc#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.872 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:35 np0005533252 nova_compute[230010]: 2025-11-24 09:57:35.893 230014 DEBUG oslo_concurrency.lockutils [None req-52b7c15e-27ac-4807-a58c-45eab80a1709 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "9558b085-fcfb-4cae-87bc-2840f81734fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:36.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:36.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:37 np0005533252 nova_compute[230010]: 2025-11-24 09:57:37.875 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:38 np0005533252 podman[238997]: 2025-11-24 09:57:38.356146326 +0000 UTC m=+0.098821958 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 04:57:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:38.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:40 np0005533252 nova_compute[230010]: 2025-11-24 09:57:40.875 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:42.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:42 np0005533252 nova_compute[230010]: 2025-11-24 09:57:42.878 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:42 np0005533252 nova_compute[230010]: 2025-11-24 09:57:42.959 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:43 np0005533252 nova_compute[230010]: 2025-11-24 09:57:43.032 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:57:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:44.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:57:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:44.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:57:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:57:45 np0005533252 nova_compute[230010]: 2025-11-24 09:57:45.878 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:46.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:46 np0005533252 podman[239029]: 2025-11-24 09:57:46.353443878 +0000 UTC m=+0.082162129 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:57:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:46.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:47 np0005533252 nova_compute[230010]: 2025-11-24 09:57:47.798 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978252.7965646, 9558b085-fcfb-4cae-87bc-2840f81734fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:57:47 np0005533252 nova_compute[230010]: 2025-11-24 09:57:47.799 230014 INFO nova.compute.manager [-] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] VM Stopped (Lifecycle Event)#033[00m
Nov 24 04:57:47 np0005533252 nova_compute[230010]: 2025-11-24 09:57:47.817 230014 DEBUG nova.compute.manager [None req-9c4d198b-61bb-4163-9c0a-72987176e301 - - - - - -] [instance: 9558b085-fcfb-4cae-87bc-2840f81734fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:57:47 np0005533252 nova_compute[230010]: 2025-11-24 09:57:47.881 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:48.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:48.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:50.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:50.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:50 np0005533252 nova_compute[230010]: 2025-11-24 09:57:50.904 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:52.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:52.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:52 np0005533252 nova_compute[230010]: 2025-11-24 09:57:52.885 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:54.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:55 np0005533252 nova_compute[230010]: 2025-11-24 09:57:55.908 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:56.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:57:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.183 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.184 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.196 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.255 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.255 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.263 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.263 230014 INFO nova.compute.claims [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.350 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:57:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4049579656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.818 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.823 230014 DEBUG nova.compute.provider_tree [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.836 230014 DEBUG nova.scheduler.client.report [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.852 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.853 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.888 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.900 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.900 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.918 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 24 04:57:57 np0005533252 nova_compute[230010]: 2025-11-24 09:57:57.939 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.063 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.066 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.066 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Creating image(s)#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.091 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.117 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.143 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.147 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.234 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.235 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.236 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.236 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:58.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.259 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.263 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.384 230014 DEBUG nova.policy [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.519 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.589 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.684 230014 DEBUG nova.objects.instance [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.695 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.696 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Ensure instance console log exists: /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.696 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.696 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.697 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:57:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:57:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:57:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:58.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:57:58 np0005533252 nova_compute[230010]: 2025-11-24 09:57:58.940 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Successfully created port: bf41c673-482b-42e3-ac98-475b716fa0e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 24 04:57:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:57:59 np0005533252 nova_compute[230010]: 2025-11-24 09:57:59.875 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Successfully updated port: bf41c673-482b-42e3-ac98-475b716fa0e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 24 04:57:59 np0005533252 nova_compute[230010]: 2025-11-24 09:57:59.888 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:57:59 np0005533252 nova_compute[230010]: 2025-11-24 09:57:59.888 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:57:59 np0005533252 nova_compute[230010]: 2025-11-24 09:57:59.889 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 24 04:57:59 np0005533252 nova_compute[230010]: 2025-11-24 09:57:59.952 230014 DEBUG nova.compute.manager [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:57:59 np0005533252 nova_compute[230010]: 2025-11-24 09:57:59.952 230014 DEBUG nova.compute.manager [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:57:59 np0005533252 nova_compute[230010]: 2025-11-24 09:57:59.953 230014 DEBUG oslo_concurrency.lockutils [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:58:00 np0005533252 nova_compute[230010]: 2025-11-24 09:58:00.010 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 24 04:58:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:00.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:58:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:58:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:00.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:00 np0005533252 nova_compute[230010]: 2025-11-24 09:58:00.945 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:02.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.612 230014 DEBUG nova.network.neutron [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.630 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.631 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance network_info: |[{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.631 230014 DEBUG oslo_concurrency.lockutils [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.632 230014 DEBUG nova.network.neutron [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.635 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start _get_guest_xml network_info=[{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.641 230014 WARNING nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.646 230014 DEBUG nova.virt.libvirt.host [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.647 230014 DEBUG nova.virt.libvirt.host [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.652 230014 DEBUG nova.virt.libvirt.host [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.653 230014 DEBUG nova.virt.libvirt.host [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.653 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.653 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.654 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.654 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.654 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.654 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.655 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.655 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.655 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.655 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.656 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.656 230014 DEBUG nova.virt.hardware [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.658 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:58:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:02.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:02 np0005533252 nova_compute[230010]: 2025-11-24 09:58:02.892 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 04:58:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4014279327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.102 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.130 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.134 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:58:03 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 04:58:03 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/625293257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.561 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.563 230014 DEBUG nova.virt.libvirt.vif [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:57:57Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.563 230014 DEBUG nova.network.os_vif_util [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.564 230014 DEBUG nova.network.os_vif_util [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.565 230014 DEBUG nova.objects.instance [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.581 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] End _get_guest_xml xml=<domain type="kvm">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <name>instance-00000006</name>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <memory>131072</memory>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <vcpu>1</vcpu>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <nova:creationTime>2025-11-24 09:58:02</nova:creationTime>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <nova:flavor name="m1.nano">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <nova:memory>128</nova:memory>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <nova:disk>1</nova:disk>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <nova:swap>0</nova:swap>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <nova:vcpus>1</nova:vcpus>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      </nova:flavor>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <nova:owner>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      </nova:owner>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <nova:ports>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        </nova:port>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      </nova:ports>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </nova:instance>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <sysinfo type="smbios">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <entry name="manufacturer">RDO</entry>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <entry name="product">OpenStack Compute</entry>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <entry name="serial">62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <entry name="uuid">62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <entry name="family">Virtual Machine</entry>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <boot dev="hd"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <smbios mode="sysinfo"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <vmcoreinfo/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <clock offset="utc">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <timer name="pit" tickpolicy="delay"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <timer name="hpet" present="no"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <cpu mode="host-model" match="exact">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <topology sockets="1" cores="1" threads="1"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <disk type="network" device="disk">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <target dev="vda" bus="virtio"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <disk type="network" device="cdrom">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <target dev="sda" bus="sata"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <interface type="ethernet">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <mac address="fa:16:3e:99:a7:ce"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <mtu size="1442"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <target dev="tapbf41c673-48"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <serial type="pty">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <log file="/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log" append="off"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <input type="tablet" bus="usb"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <rng model="virtio">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <backend model="random">/dev/urandom</backend>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <controller type="usb" index="0"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    <memballoon model="virtio">
Nov 24 04:58:03 np0005533252 nova_compute[230010]:      <stats period="10"/>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:58:03 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:58:03 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:58:03 np0005533252 nova_compute[230010]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.583 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Preparing to wait for external event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.583 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.583 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.583 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.584 230014 DEBUG nova.virt.libvirt.vif [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:57:57Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.584 230014 DEBUG nova.network.os_vif_util [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.585 230014 DEBUG nova.network.os_vif_util [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.585 230014 DEBUG os_vif [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.586 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.586 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.586 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.588 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.589 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf41c673-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.589 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf41c673-48, col_values=(('external_ids', {'iface-id': 'bf41c673-482b-42e3-ac98-475b716fa0e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:a7:ce', 'vm-uuid': '62465e3c-a372-4121-8a2e-5e10d1c3faf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.590 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:03 np0005533252 NetworkManager[48870]: <info>  [1763978283.5917] manager: (tapbf41c673-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.592 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.596 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.597 230014 INFO os_vif [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48')#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.651 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.652 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.652 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:99:a7:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.653 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Using config drive#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.680 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.896 230014 DEBUG nova.network.neutron [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated VIF entry in instance network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.897 230014 DEBUG nova.network.neutron [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.912 230014 DEBUG oslo_concurrency.lockutils [req-d08486ba-a9f0-42e9-96e5-b658263b9f30 req-ea013464-d611-4964-b567-14d3c19cb126 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.987 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Creating config drive at /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config#033[00m
Nov 24 04:58:03 np0005533252 nova_compute[230010]: 2025-11-24 09:58:03.993 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6pcbvzx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.127 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6pcbvzx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.159 230014 DEBUG nova.storage.rbd_utils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.164 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:58:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:04.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.340 230014 DEBUG oslo_concurrency.processutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config 62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.341 230014 INFO nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Deleting local config drive /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/disk.config because it was imported into RBD.#033[00m
Nov 24 04:58:04 np0005533252 kernel: tapbf41c673-48: entered promiscuous mode
Nov 24 04:58:04 np0005533252 NetworkManager[48870]: <info>  [1763978284.3822] manager: (tapbf41c673-48): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Nov 24 04:58:04 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:04Z|00066|binding|INFO|Claiming lport bf41c673-482b-42e3-ac98-475b716fa0e9 for this chassis.
Nov 24 04:58:04 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:04Z|00067|binding|INFO|bf41c673-482b-42e3-ac98-475b716fa0e9: Claiming fa:16:3e:99:a7:ce 10.100.0.8
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.436 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.442 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.451 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:a7:ce 10.100.0.8'], port_security=['fa:16:3e:99:a7:ce 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '62465e3c-a372-4121-8a2e-5e10d1c3faf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81f18750-9169-4587-b6ca-88a2bbc58afc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ebde3e26-b896-444f-b8ef-f2f39010ba47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42f28b30-955e-4ea5-b415-d62763a6e220, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=bf41c673-482b-42e3-ac98-475b716fa0e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.452 142336 INFO neutron.agent.ovn.metadata.agent [-] Port bf41c673-482b-42e3-ac98-475b716fa0e9 in datapath 81f18750-9169-4587-b6ca-88a2bbc58afc bound to our chassis#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.453 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81f18750-9169-4587-b6ca-88a2bbc58afc#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.465 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[487e462d-dc3f-473b-91b9-e5580eacdad0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.466 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81f18750-91 in ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.468 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81f18750-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.468 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[609a74b1-009a-41b5-8cc0-2d7d6c72985c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.469 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[98e90ae9-e8f2-48c0-a394-91b4584b3700]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 systemd-machined[193537]: New machine qemu-4-instance-00000006.
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.482 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4001d2-1a23-432d-8a9a-fb685f9cb416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:04Z|00068|binding|INFO|Setting lport bf41c673-482b-42e3-ac98-475b716fa0e9 ovn-installed in OVS
Nov 24 04:58:04 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:04Z|00069|binding|INFO|Setting lport bf41c673-482b-42e3-ac98-475b716fa0e9 up in Southbound
Nov 24 04:58:04 np0005533252 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.501 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.507 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc58bdc-5c92-48d7-8d34-365bb9efb86c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 systemd-udevd[239423]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:58:04 np0005533252 NetworkManager[48870]: <info>  [1763978284.5264] device (tapbf41c673-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 04:58:04 np0005533252 NetworkManager[48870]: <info>  [1763978284.5277] device (tapbf41c673-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 04:58:04 np0005533252 podman[239402]: 2025-11-24 09:58:04.538345294 +0000 UTC m=+0.068067315 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.538 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[e815618a-94be-4d9a-865c-dc35136217a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 NetworkManager[48870]: <info>  [1763978284.5441] manager: (tap81f18750-90): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.545 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1a38550e-f539-49e2-ad08-d4f17679f96e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.576 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[92efd3df-26e2-4b77-96c7-a3c14b510e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.579 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[5039a9d5-d26e-48c5-b339-789d203d9b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 NetworkManager[48870]: <info>  [1763978284.5996] device (tap81f18750-90): carrier: link connected
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.605 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c6b9fd-5f16-4e54-bfb7-09701a2263b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.620 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[94038882-12a9-4f97-99c7-6d8da8f42dea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81f18750-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:24:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422815, 'reachable_time': 18725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239456, 'error': None, 'target': 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.633 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f00552f6-1557-449e-bd18-340712e4ab53]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:24db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422815, 'tstamp': 422815}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239457, 'error': None, 'target': 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.647 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[93726689-2844-472d-95ee-b26495b13a99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81f18750-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:24:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422815, 'reachable_time': 18725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239458, 'error': None, 'target': 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.670 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[483b5e09-0540-4318-b200-c11bc847f1bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.728 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[01d26195-b418-46ea-b93f-fe2d3813f9be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.729 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81f18750-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.730 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.730 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81f18750-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:04 np0005533252 kernel: tap81f18750-90: entered promiscuous mode
Nov 24 04:58:04 np0005533252 NetworkManager[48870]: <info>  [1763978284.7328] manager: (tap81f18750-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.732 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.734 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.737 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81f18750-90, col_values=(('external_ids', {'iface-id': '51ab5aa5-77bf-4bb7-993e-d15c7b4540ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:04 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:04Z|00070|binding|INFO|Releasing lport 51ab5aa5-77bf-4bb7-993e-d15c7b4540ff from this chassis (sb_readonly=0)
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.738 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.763 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:04 np0005533252 nova_compute[230010]: 2025-11-24 09:58:04.767 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.767 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81f18750-9169-4587-b6ca-88a2bbc58afc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81f18750-9169-4587-b6ca-88a2bbc58afc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.768 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b74c9a-3f22-4456-ac9c-605f898b2304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.769 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: global
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    log         /dev/log local0 debug
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    log-tag     haproxy-metadata-proxy-81f18750-9169-4587-b6ca-88a2bbc58afc
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    user        root
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    group       root
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    maxconn     1024
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    pidfile     /var/lib/neutron/external/pids/81f18750-9169-4587-b6ca-88a2bbc58afc.pid.haproxy
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    daemon
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: defaults
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    log global
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    mode http
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    option httplog
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    option dontlognull
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    option http-server-close
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    option forwardfor
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    retries                 3
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    timeout http-request    30s
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    timeout connect         30s
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    timeout client          32s
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    timeout server          32s
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    timeout http-keep-alive 30s
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: listen listener
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    bind 169.254.169.254:80
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    server metadata /var/lib/neutron/metadata_proxy
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]:    http-request add-header X-OVN-Network-ID 81f18750-9169-4587-b6ca-88a2bbc58afc
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 24 04:58:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:04.770 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'env', 'PROCESS_TAG=haproxy-81f18750-9169-4587-b6ca-88a2bbc58afc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81f18750-9169-4587-b6ca-88a2bbc58afc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 24 04:58:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:04.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.068 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978285.067941, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.069 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] VM Started (Lifecycle Event)#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.095 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.099 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978285.068107, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.099 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] VM Paused (Lifecycle Event)#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.114 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.117 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 04:58:05 np0005533252 podman[239532]: 2025-11-24 09:58:05.137486691 +0000 UTC m=+0.050309831 container create 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.139 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 04:58:05 np0005533252 systemd[1]: Started libpod-conmon-312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e.scope.
Nov 24 04:58:05 np0005533252 podman[239532]: 2025-11-24 09:58:05.111222619 +0000 UTC m=+0.024045779 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:58:05 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:58:05 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8077583c93206f4b50fb98a5f2ccb3fea2a970b30dff429250e8ff4a1f0a34dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 04:58:05 np0005533252 podman[239532]: 2025-11-24 09:58:05.230301591 +0000 UTC m=+0.143124751 container init 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 04:58:05 np0005533252 podman[239532]: 2025-11-24 09:58:05.235362164 +0000 UTC m=+0.148185304 container start 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 04:58:05 np0005533252 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [NOTICE]   (239552) : New worker (239554) forked
Nov 24 04:58:05 np0005533252 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [NOTICE]   (239552) : Loading success.
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.514 230014 DEBUG nova.compute.manager [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.515 230014 DEBUG oslo_concurrency.lockutils [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.515 230014 DEBUG oslo_concurrency.lockutils [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.515 230014 DEBUG oslo_concurrency.lockutils [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.515 230014 DEBUG nova.compute.manager [req-1c718bb5-4cb2-4be2-9fe3-45fbcf486c3f req-d58c3083-3677-4362-b737-56f9f296dd16 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Processing event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.516 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.520 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978285.519798, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.520 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] VM Resumed (Lifecycle Event)#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.522 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.526 230014 INFO nova.virt.libvirt.driver [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance spawned successfully.#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.526 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.541 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.546 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.548 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.549 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.549 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.550 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.550 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.550 230014 DEBUG nova.virt.libvirt.driver [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.571 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.597 230014 INFO nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Took 7.53 seconds to spawn the instance on the hypervisor.#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.597 230014 DEBUG nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.651 230014 INFO nova.compute.manager [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Took 8.42 seconds to build instance.#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.664 230014 DEBUG oslo_concurrency.lockutils [None req-5b0369d5-0306-4be2-b5b8-b16b5d898411 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:05 np0005533252 nova_compute[230010]: 2025-11-24 09:58:05.947 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:06 np0005533252 nova_compute[230010]: 2025-11-24 09:58:06.163 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:06 np0005533252 nova_compute[230010]: 2025-11-24 09:58:06.188 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Triggering sync for uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 24 04:58:06 np0005533252 nova_compute[230010]: 2025-11-24 09:58:06.189 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:06 np0005533252 nova_compute[230010]: 2025-11-24 09:58:06.190 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:06 np0005533252 nova_compute[230010]: 2025-11-24 09:58:06.230 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:06.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:07 np0005533252 nova_compute[230010]: 2025-11-24 09:58:07.590 230014 DEBUG nova.compute.manager [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:58:07 np0005533252 nova_compute[230010]: 2025-11-24 09:58:07.591 230014 DEBUG oslo_concurrency.lockutils [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:07 np0005533252 nova_compute[230010]: 2025-11-24 09:58:07.592 230014 DEBUG oslo_concurrency.lockutils [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:07 np0005533252 nova_compute[230010]: 2025-11-24 09:58:07.592 230014 DEBUG oslo_concurrency.lockutils [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:07 np0005533252 nova_compute[230010]: 2025-11-24 09:58:07.592 230014 DEBUG nova.compute.manager [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:58:07 np0005533252 nova_compute[230010]: 2025-11-24 09:58:07.593 230014 WARNING nova.compute.manager [req-14bf6bcd-acf3-446a-bd36-987f7a7b2276 req-32ca1e3b-127e-4c3a-9e58-985ba621bdec 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 for instance with vm_state active and task_state None.#033[00m
Nov 24 04:58:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:08.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:08 np0005533252 nova_compute[230010]: 2025-11-24 09:58:08.590 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:09 np0005533252 podman[239590]: 2025-11-24 09:58:09.413794416 +0000 UTC m=+0.134128439 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 04:58:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:10 np0005533252 nova_compute[230010]: 2025-11-24 09:58:10.950 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:11 np0005533252 NetworkManager[48870]: <info>  [1763978291.4006] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 24 04:58:11 np0005533252 nova_compute[230010]: 2025-11-24 09:58:11.399 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:11 np0005533252 NetworkManager[48870]: <info>  [1763978291.4016] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 24 04:58:11 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:11Z|00071|binding|INFO|Releasing lport 51ab5aa5-77bf-4bb7-993e-d15c7b4540ff from this chassis (sb_readonly=0)
Nov 24 04:58:11 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:11Z|00072|binding|INFO|Releasing lport 51ab5aa5-77bf-4bb7-993e-d15c7b4540ff from this chassis (sb_readonly=0)
Nov 24 04:58:11 np0005533252 nova_compute[230010]: 2025-11-24 09:58:11.433 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:11 np0005533252 nova_compute[230010]: 2025-11-24 09:58:11.438 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:11 np0005533252 nova_compute[230010]: 2025-11-24 09:58:11.815 230014 DEBUG nova.compute.manager [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:58:11 np0005533252 nova_compute[230010]: 2025-11-24 09:58:11.815 230014 DEBUG nova.compute.manager [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:58:11 np0005533252 nova_compute[230010]: 2025-11-24 09:58:11.816 230014 DEBUG oslo_concurrency.lockutils [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:58:11 np0005533252 nova_compute[230010]: 2025-11-24 09:58:11.816 230014 DEBUG oslo_concurrency.lockutils [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:58:11 np0005533252 nova_compute[230010]: 2025-11-24 09:58:11.816 230014 DEBUG nova.network.neutron [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:58:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:12 np0005533252 nova_compute[230010]: 2025-11-24 09:58:12.829 230014 DEBUG nova.network.neutron [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated VIF entry in instance network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:58:12 np0005533252 nova_compute[230010]: 2025-11-24 09:58:12.829 230014 DEBUG nova.network.neutron [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:58:12 np0005533252 nova_compute[230010]: 2025-11-24 09:58:12.846 230014 DEBUG oslo_concurrency.lockutils [req-09924f42-779e-4c6e-a5fd-6e6bdfdd7a36 req-eb0c8d47-516b-4ae9-a3d4-ab4367a2fc76 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:58:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:13 np0005533252 nova_compute[230010]: 2025-11-24 09:58:13.592 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:13 np0005533252 nova_compute[230010]: 2025-11-24 09:58:13.791 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:14.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:14 np0005533252 nova_compute[230010]: 2025-11-24 09:58:14.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:14.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:58:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:58:15 np0005533252 nova_compute[230010]: 2025-11-24 09:58:15.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:15 np0005533252 nova_compute[230010]: 2025-11-24 09:58:15.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:58:15 np0005533252 nova_compute[230010]: 2025-11-24 09:58:15.953 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 04:58:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 04:58:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:16.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:58:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:58:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:17 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:17 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:17 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:17 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:17 np0005533252 podman[239770]: 2025-11-24 09:58:17.353762338 +0000 UTC m=+0.090029772 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:58:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:58:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:18.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.593 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.784 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:58:18 np0005533252 nova_compute[230010]: 2025-11-24 09:58:18.784 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:58:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2462380472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.238 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.301 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.302 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.435 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.436 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4791MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.436 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.436 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:19 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:19Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:a7:ce 10.100.0.8
Nov 24 04:58:19 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:19Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:a7:ce 10.100.0.8
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.499 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.500 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.500 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.531 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:58:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3874872579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.976 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.981 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:58:19 np0005533252 nova_compute[230010]: 2025-11-24 09:58:19.994 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:58:20 np0005533252 nova_compute[230010]: 2025-11-24 09:58:20.010 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 04:58:20 np0005533252 nova_compute[230010]: 2025-11-24 09:58:20.010 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:20.059 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:20.060 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:20.061 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:20 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:20 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:20 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:58:20 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:20 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:20 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:58:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:20 np0005533252 nova_compute[230010]: 2025-11-24 09:58:20.954 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:21 np0005533252 nova_compute[230010]: 2025-11-24 09:58:21.010 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:21 np0005533252 nova_compute[230010]: 2025-11-24 09:58:21.011 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 04:58:21 np0005533252 nova_compute[230010]: 2025-11-24 09:58:21.011 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 04:58:21 np0005533252 nova_compute[230010]: 2025-11-24 09:58:21.177 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:58:21 np0005533252 nova_compute[230010]: 2025-11-24 09:58:21.177 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:58:21 np0005533252 nova_compute[230010]: 2025-11-24 09:58:21.178 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 24 04:58:21 np0005533252 nova_compute[230010]: 2025-11-24 09:58:21.178 230014 DEBUG nova.objects.instance [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:58:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:22 np0005533252 nova_compute[230010]: 2025-11-24 09:58:22.476 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:58:22 np0005533252 nova_compute[230010]: 2025-11-24 09:58:22.495 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:58:22 np0005533252 nova_compute[230010]: 2025-11-24 09:58:22.496 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 24 04:58:22 np0005533252 nova_compute[230010]: 2025-11-24 09:58:22.497 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:58:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:58:23 np0005533252 nova_compute[230010]: 2025-11-24 09:58:23.594 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:24 np0005533252 nova_compute[230010]: 2025-11-24 09:58:24.247 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:58:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:24 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:24 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:58:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:25 np0005533252 nova_compute[230010]: 2025-11-24 09:58:25.956 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:26.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:26 np0005533252 nova_compute[230010]: 2025-11-24 09:58:26.408 230014 INFO nova.compute.manager [None req-c1f4acb6-62ec-48ba-8539-f5a89e8c8956 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Get console output#033[00m
Nov 24 04:58:26 np0005533252 nova_compute[230010]: 2025-11-24 09:58:26.413 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 24 04:58:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:28.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:28 np0005533252 nova_compute[230010]: 2025-11-24 09:58:28.595 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:28.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:30.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:58:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:58:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:30 np0005533252 nova_compute[230010]: 2025-11-24 09:58:30.994 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:31 np0005533252 nova_compute[230010]: 2025-11-24 09:58:31.306 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:31 np0005533252 nova_compute[230010]: 2025-11-24 09:58:31.307 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:31 np0005533252 nova_compute[230010]: 2025-11-24 09:58:31.307 230014 DEBUG nova.objects.instance [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'flavor' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:58:31 np0005533252 nova_compute[230010]: 2025-11-24 09:58:31.622 230014 DEBUG nova.objects.instance [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_requests' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:58:31 np0005533252 nova_compute[230010]: 2025-11-24 09:58:31.634 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 24 04:58:31 np0005533252 nova_compute[230010]: 2025-11-24 09:58:31.754 230014 DEBUG nova.policy [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 24 04:58:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:32.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:32 np0005533252 nova_compute[230010]: 2025-11-24 09:58:32.374 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Successfully created port: 2ad41fbf-b749-4394-9d14-483c127ff44c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 24 04:58:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:33 np0005533252 nova_compute[230010]: 2025-11-24 09:58:33.568 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Successfully updated port: 2ad41fbf-b749-4394-9d14-483c127ff44c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 24 04:58:33 np0005533252 nova_compute[230010]: 2025-11-24 09:58:33.583 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:58:33 np0005533252 nova_compute[230010]: 2025-11-24 09:58:33.584 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:58:33 np0005533252 nova_compute[230010]: 2025-11-24 09:58:33.584 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 24 04:58:33 np0005533252 nova_compute[230010]: 2025-11-24 09:58:33.599 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:33 np0005533252 nova_compute[230010]: 2025-11-24 09:58:33.686 230014 DEBUG nova.compute.manager [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:58:33 np0005533252 nova_compute[230010]: 2025-11-24 09:58:33.687 230014 DEBUG nova.compute.manager [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-2ad41fbf-b749-4394-9d14-483c127ff44c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:58:33 np0005533252 nova_compute[230010]: 2025-11-24 09:58:33.687 230014 DEBUG oslo_concurrency.lockutils [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:58:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:58:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:34.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:58:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:35 np0005533252 podman[239894]: 2025-11-24 09:58:35.338863909 +0000 UTC m=+0.080771765 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.559 230014 DEBUG nova.network.neutron [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.575 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.576 230014 DEBUG oslo_concurrency.lockutils [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.576 230014 DEBUG nova.network.neutron [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port 2ad41fbf-b749-4394-9d14-483c127ff44c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.579 230014 DEBUG nova.virt.libvirt.vif [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.579 230014 DEBUG nova.network.os_vif_util [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.580 230014 DEBUG nova.network.os_vif_util [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.580 230014 DEBUG os_vif [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.581 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.581 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.581 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.584 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.584 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ad41fbf-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.585 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ad41fbf-b7, col_values=(('external_ids', {'iface-id': '2ad41fbf-b749-4394-9d14-483c127ff44c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:72:0f', 'vm-uuid': '62465e3c-a372-4121-8a2e-5e10d1c3faf6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.586 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 NetworkManager[48870]: <info>  [1763978315.5875] manager: (tap2ad41fbf-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.588 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.594 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.595 230014 INFO os_vif [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7')#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.596 230014 DEBUG nova.virt.libvirt.vif [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.596 230014 DEBUG nova.network.os_vif_util [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.597 230014 DEBUG nova.network.os_vif_util [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.599 230014 DEBUG nova.virt.libvirt.guest [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] attach device xml: <interface type="ethernet">
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <mac address="fa:16:3e:df:72:0f"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <model type="virtio"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <mtu size="1442"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <target dev="tap2ad41fbf-b7"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]: </interface>
Nov 24 04:58:35 np0005533252 nova_compute[230010]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 24 04:58:35 np0005533252 NetworkManager[48870]: <info>  [1763978315.6087] manager: (tap2ad41fbf-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Nov 24 04:58:35 np0005533252 kernel: tap2ad41fbf-b7: entered promiscuous mode
Nov 24 04:58:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:35Z|00073|binding|INFO|Claiming lport 2ad41fbf-b749-4394-9d14-483c127ff44c for this chassis.
Nov 24 04:58:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:35Z|00074|binding|INFO|2ad41fbf-b749-4394-9d14-483c127ff44c: Claiming fa:16:3e:df:72:0f 10.100.0.24
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.611 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.622 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:72:0f 10.100.0.24'], port_security=['fa:16:3e:df:72:0f 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '62465e3c-a372-4121-8a2e-5e10d1c3faf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58766ea9-d6bf-4e11-9e8a-1652f6f7c4d5, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=2ad41fbf-b749-4394-9d14-483c127ff44c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.624 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad41fbf-b749-4394-9d14-483c127ff44c in datapath cbb18554-4df6-4004-8b94-6d2a9b50722d bound to our chassis#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.625 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cbb18554-4df6-4004-8b94-6d2a9b50722d#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.636 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[44eea528-b777-4d6f-af78-f4f089df7926]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.636 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcbb18554-41 in ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.638 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcbb18554-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.638 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[a938a700-ce98-4f86-b8ab-3b53d40549b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.639 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[454744c3-9dee-4e17-bd96-afb59c96b927]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 systemd-udevd[239921]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 04:58:35 np0005533252 NetworkManager[48870]: <info>  [1763978315.6519] device (tap2ad41fbf-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 04:58:35 np0005533252 NetworkManager[48870]: <info>  [1763978315.6531] device (tap2ad41fbf-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.654 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[a590b0c5-c724-462e-b73f-5ac3f8d2914d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.656 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:35Z|00075|binding|INFO|Setting lport 2ad41fbf-b749-4394-9d14-483c127ff44c ovn-installed in OVS
Nov 24 04:58:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:35Z|00076|binding|INFO|Setting lport 2ad41fbf-b749-4394-9d14-483c127ff44c up in Southbound
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.659 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.677 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ad55ad39-9b9d-4e46-9d7f-5c2fce5777cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.699 230014 DEBUG nova.virt.libvirt.driver [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.700 230014 DEBUG nova.virt.libvirt.driver [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.700 230014 DEBUG nova.virt.libvirt.driver [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:99:a7:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.700 230014 DEBUG nova.virt.libvirt.driver [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:df:72:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.701 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1e685d-0e5c-4656-a172-d1ea46740a4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.706 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d3782be3-91e5-4c54-b019-bbc4dab256ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 NetworkManager[48870]: <info>  [1763978315.7080] manager: (tapcbb18554-40): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.722 230014 DEBUG nova.virt.libvirt.guest [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:58:35</nova:creationTime>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 04:58:35 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    <nova:port uuid="2ad41fbf-b749-4394-9d14-483c127ff44c">
Nov 24 04:58:35 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:58:35 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:58:35 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:58:35 np0005533252 nova_compute[230010]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.733 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b8d3a6-a6fe-4fd6-83ef-23a5ca0c20b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.736 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5c1966-beab-4400-a7b4-e1ccb63221ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.744 230014 DEBUG oslo_concurrency.lockutils [None req-340367ba-9f04-4816-8f4a-00e1efdea268 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:35 np0005533252 NetworkManager[48870]: <info>  [1763978315.7545] device (tapcbb18554-40): carrier: link connected
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.759 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[33e8423c-d1c9-47bc-9f13-034411185da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.781 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[090d75c6-2dd5-491c-8368-a7b0a78ce59c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbb18554-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:d4:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425930, 'reachable_time': 25772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239947, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.795 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f29711b4-d554-43f3-87c8-2f5c63b9fa02]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:d482'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425930, 'tstamp': 425930}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239948, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.810 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d0347af3-acc5-4183-b49e-738e8df6a4dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbb18554-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:d4:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425930, 'reachable_time': 25772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239949, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.838 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6089f1-3bb7-4dfd-a081-4af247d0ae34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.867 230014 DEBUG nova.compute.manager [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.867 230014 DEBUG oslo_concurrency.lockutils [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.867 230014 DEBUG oslo_concurrency.lockutils [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.868 230014 DEBUG oslo_concurrency.lockutils [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.868 230014 DEBUG nova.compute.manager [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.868 230014 WARNING nova.compute.manager [req-ffca368c-851f-4104-8641-98b4647f35a2 req-0400abf1-0913-4831-8c8e-a51887ef1ba0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c for instance with vm_state active and task_state None.#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.906 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[646460e5-41bd-4a98-8f25-9615db463c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.908 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbb18554-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.908 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.908 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbb18554-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:35 np0005533252 kernel: tapcbb18554-40: entered promiscuous mode
Nov 24 04:58:35 np0005533252 NetworkManager[48870]: <info>  [1763978315.9114] manager: (tapcbb18554-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.911 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.913 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.914 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcbb18554-40, col_values=(('external_ids', {'iface-id': '7477e0b1-7d3c-42ae-9333-aaa2b41f75a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.915 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:35Z|00077|binding|INFO|Releasing lport 7477e0b1-7d3c-42ae-9333-aaa2b41f75a9 from this chassis (sb_readonly=0)
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.930 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.931 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.932 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[211e0df8-ddd9-4c9b-bda8-907289c1b345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.933 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: global
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    log         /dev/log local0 debug
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    log-tag     haproxy-metadata-proxy-cbb18554-4df6-4004-8b94-6d2a9b50722d
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    user        root
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    group       root
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    maxconn     1024
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    pidfile     /var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    daemon
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: defaults
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    log global
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    mode http
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    option httplog
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    option dontlognull
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    option http-server-close
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    option forwardfor
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    retries                 3
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    timeout http-request    30s
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    timeout connect         30s
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    timeout client          32s
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    timeout server          32s
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    timeout http-keep-alive 30s
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: listen listener
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    bind 169.254.169.254:80
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    server metadata /var/lib/neutron/metadata_proxy
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]:    http-request add-header X-OVN-Network-ID cbb18554-4df6-4004-8b94-6d2a9b50722d
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 24 04:58:35 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:35.933 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'env', 'PROCESS_TAG=haproxy-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cbb18554-4df6-4004-8b94-6d2a9b50722d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 24 04:58:35 np0005533252 nova_compute[230010]: 2025-11-24 09:58:35.995 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:36 np0005533252 nova_compute[230010]: 2025-11-24 09:58:36.050 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:58:36 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:36.053 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:58:36 np0005533252 podman[239982]: 2025-11-24 09:58:36.271554912 +0000 UTC m=+0.046453318 container create bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 04:58:36 np0005533252 systemd[1]: Started libpod-conmon-bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b.scope.
Nov 24 04:58:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:36.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:36 np0005533252 systemd[1]: Started libcrun container.
Nov 24 04:58:36 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447107754eda77794034edf91920a06d35d4d1b91593ad5057e2c61b459718a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 04:58:36 np0005533252 podman[239982]: 2025-11-24 09:58:36.24774428 +0000 UTC m=+0.022642706 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 04:58:36 np0005533252 podman[239982]: 2025-11-24 09:58:36.343702815 +0000 UTC m=+0.118601241 container init bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 04:58:36 np0005533252 podman[239982]: 2025-11-24 09:58:36.348914383 +0000 UTC m=+0.123812789 container start bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 04:58:36 np0005533252 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [NOTICE]   (240002) : New worker (240004) forked
Nov 24 04:58:36 np0005533252 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [NOTICE]   (240002) : Loading success.
Nov 24 04:58:36 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:36.414 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 04:58:36 np0005533252 nova_compute[230010]: 2025-11-24 09:58:36.662 230014 DEBUG nova.network.neutron [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated VIF entry in instance network info cache for port 2ad41fbf-b749-4394-9d14-483c127ff44c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:58:36 np0005533252 nova_compute[230010]: 2025-11-24 09:58:36.663 230014 DEBUG nova.network.neutron [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:58:36 np0005533252 nova_compute[230010]: 2025-11-24 09:58:36.676 230014 DEBUG oslo_concurrency.lockutils [req-a6a3c25e-5102-4c48-9698-a411f3473fbf req-9c51d3c0-2c92-4321-b9c5-04de45187497 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:58:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:36.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:37 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:37Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:72:0f 10.100.0.24
Nov 24 04:58:37 np0005533252 ovn_controller[132966]: 2025-11-24T09:58:37Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:72:0f 10.100.0.24
Nov 24 04:58:37 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:58:37.416 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:58:37 np0005533252 nova_compute[230010]: 2025-11-24 09:58:37.962 230014 DEBUG nova.compute.manager [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:58:37 np0005533252 nova_compute[230010]: 2025-11-24 09:58:37.963 230014 DEBUG oslo_concurrency.lockutils [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:58:37 np0005533252 nova_compute[230010]: 2025-11-24 09:58:37.963 230014 DEBUG oslo_concurrency.lockutils [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:58:37 np0005533252 nova_compute[230010]: 2025-11-24 09:58:37.964 230014 DEBUG oslo_concurrency.lockutils [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:58:37 np0005533252 nova_compute[230010]: 2025-11-24 09:58:37.964 230014 DEBUG nova.compute.manager [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 04:58:37 np0005533252 nova_compute[230010]: 2025-11-24 09:58:37.965 230014 WARNING nova.compute.manager [req-96e7c529-9878-4501-9a97-c4dbe0066426 req-58153fdf-b208-4923-ba6d-c5121c6d1d04 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c for instance with vm_state active and task_state None.#033[00m
Nov 24 04:58:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:38.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:38.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:40.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:40 np0005533252 podman[240015]: 2025-11-24 09:58:40.367187759 +0000 UTC m=+0.098321224 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 04:58:40 np0005533252 nova_compute[230010]: 2025-11-24 09:58:40.586 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:58:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:40.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:40 np0005533252 nova_compute[230010]: 2025-11-24 09:58:40.996 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:58:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:42.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:44.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:58:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:58:45 np0005533252 nova_compute[230010]: 2025-11-24 09:58:45.590 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:58:45 np0005533252 nova_compute[230010]: 2025-11-24 09:58:45.998 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:58:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:46.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:58:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:58:48 np0005533252 podman[240071]: 2025-11-24 09:58:48.320232562 +0000 UTC m=+0.059760613 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 04:58:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:48.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:50.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:50 np0005533252 nova_compute[230010]: 2025-11-24 09:58:50.642 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:58:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:50.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:51 np0005533252 nova_compute[230010]: 2025-11-24 09:58:51.000 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:58:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:52.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:52.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:58:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:54.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:58:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:58:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:54.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:55 np0005533252 nova_compute[230010]: 2025-11-24 09:58:55.646 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:58:56 np0005533252 nova_compute[230010]: 2025-11-24 09:58:56.003 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:58:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000071s ======
Nov 24 04:58:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:56.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Nov 24 04:58:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:56.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:58.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:58:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:58:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:58.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:58:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:00.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:59:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:59:00 np0005533252 nova_compute[230010]: 2025-11-24 09:59:00.649 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:59:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:00.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:01 np0005533252 nova_compute[230010]: 2025-11-24 09:59:01.005 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:59:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:02.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:04.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:05 np0005533252 nova_compute[230010]: 2025-11-24 09:59:05.689 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:59:06 np0005533252 nova_compute[230010]: 2025-11-24 09:59:06.008 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:59:06 np0005533252 podman[240099]: 2025-11-24 09:59:06.334876973 +0000 UTC m=+0.070706699 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 04:59:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:06.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:06.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:08.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:08.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:10.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:10 np0005533252 nova_compute[230010]: 2025-11-24 09:59:10.693 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:59:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:11 np0005533252 nova_compute[230010]: 2025-11-24 09:59:11.011 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 04:59:11 np0005533252 podman[240146]: 2025-11-24 09:59:11.327418179 +0000 UTC m=+0.071297575 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 04:59:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:12.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.670445) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353670480, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1711, "num_deletes": 255, "total_data_size": 4502186, "memory_usage": 4569200, "flush_reason": "Manual Compaction"}
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353684976, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2881479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28427, "largest_seqno": 30133, "table_properties": {"data_size": 2874413, "index_size": 4073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14851, "raw_average_key_size": 19, "raw_value_size": 2860098, "raw_average_value_size": 3768, "num_data_blocks": 179, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978217, "oldest_key_time": 1763978217, "file_creation_time": 1763978353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14591 microseconds, and 7200 cpu microseconds.
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.685030) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2881479 bytes OK
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.685059) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686645) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686664) EVENT_LOG_v1 {"time_micros": 1763978353686657, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686686) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 4494274, prev total WAL file size 4494274, number of live WAL files 2.
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.688234) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2813KB)], [54(13MB)]
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353688272, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17267122, "oldest_snapshot_seqno": -1}
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6082 keys, 17121643 bytes, temperature: kUnknown
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353777908, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17121643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17077642, "index_size": 27699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 154781, "raw_average_key_size": 25, "raw_value_size": 16964653, "raw_average_value_size": 2789, "num_data_blocks": 1134, "num_entries": 6082, "num_filter_entries": 6082, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.778159) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17121643 bytes
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.779287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.5 rd, 190.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 13.7 +0.0 blob) out(16.3 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 6610, records dropped: 528 output_compression: NoCompression
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.779305) EVENT_LOG_v1 {"time_micros": 1763978353779297, "job": 32, "event": "compaction_finished", "compaction_time_micros": 89702, "compaction_time_cpu_micros": 30929, "output_level": 6, "num_output_files": 1, "total_output_size": 17121643, "num_input_records": 6610, "num_output_records": 6082, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353779974, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353782599, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.688191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:59:13 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-09:59:13.782716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 04:59:14 np0005533252 ovn_controller[132966]: 2025-11-24T09:59:14Z|00078|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 24 04:59:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:14.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:14 np0005533252 nova_compute[230010]: 2025-11-24 09:59:14.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:59:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:59:15 np0005533252 nova_compute[230010]: 2025-11-24 09:59:15.696 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:15 np0005533252 nova_compute[230010]: 2025-11-24 09:59:15.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:16 np0005533252 nova_compute[230010]: 2025-11-24 09:59:16.015 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:16.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:16 np0005533252 nova_compute[230010]: 2025-11-24 09:59:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:16 np0005533252 nova_compute[230010]: 2025-11-24 09:59:16.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 04:59:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:16.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:17 np0005533252 nova_compute[230010]: 2025-11-24 09:59:17.496 230014 DEBUG nova.compute.manager [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 04:59:17 np0005533252 nova_compute[230010]: 2025-11-24 09:59:17.497 230014 DEBUG nova.compute.manager [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-2ad41fbf-b749-4394-9d14-483c127ff44c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 04:59:17 np0005533252 nova_compute[230010]: 2025-11-24 09:59:17.497 230014 DEBUG oslo_concurrency.lockutils [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:59:17 np0005533252 nova_compute[230010]: 2025-11-24 09:59:17.497 230014 DEBUG oslo_concurrency.lockutils [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:59:17 np0005533252 nova_compute[230010]: 2025-11-24 09:59:17.498 230014 DEBUG nova.network.neutron [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port 2ad41fbf-b749-4394-9d14-483c127ff44c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 04:59:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:18 np0005533252 nova_compute[230010]: 2025-11-24 09:59:18.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:18.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:19 np0005533252 podman[240177]: 2025-11-24 09:59:19.303341801 +0000 UTC m=+0.047236866 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 04:59:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:59:20.061 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:59:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:59:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:59:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:59:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:59:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:20.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.375 230014 DEBUG nova.network.neutron [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated VIF entry in instance network info cache for port 2ad41fbf-b749-4394-9d14-483c127ff44c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.375 230014 DEBUG nova.network.neutron [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.400 230014 DEBUG oslo_concurrency.lockutils [req-132dc78c-9235-4eab-bb3c-e515d222fdd7 req-afe5557f-32a0-441f-8ee2-7e9f91c6530e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.698 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.782 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.782 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.782 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.783 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 04:59:20 np0005533252 nova_compute[230010]: 2025-11-24 09:59:20.783 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:59:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.017 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:21 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:59:21 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3070747134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.243 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.302 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.302 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.514 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.515 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4759MB free_disk=59.89700698852539GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.516 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.516 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.575 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.576 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.576 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.593 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.625 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.626 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.641 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.661 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 24 04:59:21 np0005533252 nova_compute[230010]: 2025-11-24 09:59:21.694 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 04:59:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 04:59:22 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2311081532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 04:59:22 np0005533252 nova_compute[230010]: 2025-11-24 09:59:22.141 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 04:59:22 np0005533252 nova_compute[230010]: 2025-11-24 09:59:22.149 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 04:59:22 np0005533252 nova_compute[230010]: 2025-11-24 09:59:22.167 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 04:59:22 np0005533252 nova_compute[230010]: 2025-11-24 09:59:22.169 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 04:59:22 np0005533252 nova_compute[230010]: 2025-11-24 09:59:22.169 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 04:59:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:22.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:22.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:23 np0005533252 nova_compute[230010]: 2025-11-24 09:59:23.170 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:23 np0005533252 nova_compute[230010]: 2025-11-24 09:59:23.171 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 04:59:23 np0005533252 nova_compute[230010]: 2025-11-24 09:59:23.171 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 04:59:23 np0005533252 nova_compute[230010]: 2025-11-24 09:59:23.334 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 04:59:23 np0005533252 nova_compute[230010]: 2025-11-24 09:59:23.335 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 04:59:23 np0005533252 nova_compute[230010]: 2025-11-24 09:59:23.335 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 24 04:59:23 np0005533252 nova_compute[230010]: 2025-11-24 09:59:23.335 230014 DEBUG nova.objects.instance [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:59:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:24.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 04:59:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 04:59:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:24.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:25 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:59:25 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 04:59:25 np0005533252 nova_compute[230010]: 2025-11-24 09:59:25.701 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:26 np0005533252 nova_compute[230010]: 2025-11-24 09:59:26.020 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:26.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:26.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:28 np0005533252 nova_compute[230010]: 2025-11-24 09:59:28.362 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 04:59:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000071s ======
Nov 24 04:59:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:28.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Nov 24 04:59:28 np0005533252 nova_compute[230010]: 2025-11-24 09:59:28.385 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 04:59:28 np0005533252 nova_compute[230010]: 2025-11-24 09:59:28.386 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 24 04:59:28 np0005533252 nova_compute[230010]: 2025-11-24 09:59:28.386 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:28.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:28 np0005533252 nova_compute[230010]: 2025-11-24 09:59:28.975 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:28 np0005533252 nova_compute[230010]: 2025-11-24 09:59:28.975 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 04:59:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 04:59:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 04:59:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:29 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:59:29 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 04:59:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:30.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:59:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:59:30 np0005533252 nova_compute[230010]: 2025-11-24 09:59:30.707 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:30.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:31 np0005533252 nova_compute[230010]: 2025-11-24 09:59:31.023 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:32.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:32.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:34.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:34.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:35 np0005533252 nova_compute[230010]: 2025-11-24 09:59:35.710 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:36 np0005533252 nova_compute[230010]: 2025-11-24 09:59:36.026 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:36.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:59:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:36.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:59:37 np0005533252 podman[240382]: 2025-11-24 09:59:37.337869651 +0000 UTC m=+0.073577000 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 04:59:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:38.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:38.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:59:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:40.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:59:40 np0005533252 nova_compute[230010]: 2025-11-24 09:59:40.715 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:59:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:40.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:59:41 np0005533252 nova_compute[230010]: 2025-11-24 09:59:41.028 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:42.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:42 np0005533252 podman[240405]: 2025-11-24 09:59:42.409026059 +0000 UTC m=+0.140635230 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 04:59:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:42.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:44.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:44.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 04:59:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 04:59:45 np0005533252 nova_compute[230010]: 2025-11-24 09:59:45.757 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:46 np0005533252 nova_compute[230010]: 2025-11-24 09:59:46.030 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:46.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:46 np0005533252 ovn_controller[132966]: 2025-11-24T09:59:46Z|00079|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 24 04:59:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:46.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:48.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:48.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:50 np0005533252 podman[240461]: 2025-11-24 09:59:50.317522132 +0000 UTC m=+0.057137028 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 04:59:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:50.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:50 np0005533252 nova_compute[230010]: 2025-11-24 09:59:50.799 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:51 np0005533252 nova_compute[230010]: 2025-11-24 09:59:51.033 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 04:59:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:52.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 04:59:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:52.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:54.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:55 np0005533252 nova_compute[230010]: 2025-11-24 09:59:55.813 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:56 np0005533252 nova_compute[230010]: 2025-11-24 09:59:56.037 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:56.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:56 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:59:56.737 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 04:59:56 np0005533252 nova_compute[230010]: 2025-11-24 09:59:56.738 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:56 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 09:59:56.739 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 04:59:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 04:59:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:57.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 04:59:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:58.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 04:59:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 04:59:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:59.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 04:59:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.678 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-2ad41fbf-b749-4394-9d14-483c127ff44c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.679 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-2ad41fbf-b749-4394-9d14-483c127ff44c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.690 230014 DEBUG nova.objects.instance [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'flavor' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.711 230014 DEBUG nova.virt.libvirt.vif [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.712 230014 DEBUG nova.network.os_vif_util [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.713 230014 DEBUG nova.network.os_vif_util [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.718 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.721 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.724 230014 DEBUG nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Attempting to detach device tap2ad41fbf-b7 from instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.724 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] detach device xml: <interface type="ethernet">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <mac address="fa:16:3e:df:72:0f"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <model type="virtio"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <mtu size="1442"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <target dev="tap2ad41fbf-b7"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: </interface>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.730 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.733 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <name>instance-00000006</name>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:58:35</nova:creationTime>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:port uuid="2ad41fbf-b749-4394-9d14-483c127ff44c">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <memory unit='KiB'>131072</memory>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <vcpu placement='static'>1</vcpu>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <resource>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <partition>/machine</partition>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </resource>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <sysinfo type='smbios'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='manufacturer'>RDO</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='product'>OpenStack Compute</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='serial'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='uuid'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='family'>Virtual Machine</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <boot dev='hd'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <smbios mode='sysinfo'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <vmcoreinfo state='on'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <cpu mode='custom' match='exact' check='full'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <vendor>AMD</vendor>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='x2apic'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc-deadline'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='hypervisor'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc_adjust'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='spec-ctrl'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='stibp'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='ssbd'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='cmp_legacy'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='overflow-recov'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='succor'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='ibrs'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='amd-ssbd'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='virt-ssbd'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='lbrv'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='tsc-scale'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='vmcb-clean'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='flushbyasid'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pause-filter'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pfthreshold'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='xsaves'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svm'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='topoext'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='npt'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='nrip-save'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <clock offset='utc'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <timer name='pit' tickpolicy='delay'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <timer name='hpet' present='no'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <on_poweroff>destroy</on_poweroff>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <on_reboot>restart</on_reboot>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <on_crash>destroy</on_crash>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <disk type='network' device='disk'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk' index='2'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target dev='vda' bus='virtio'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='virtio-disk0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <disk type='network' device='cdrom'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config' index='1'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target dev='sda' bus='sata'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <readonly/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='sata0-0-0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='0' model='pcie-root'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pcie.0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='1' port='0x10'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='2' port='0x11'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='3' port='0x12'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.3'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='4' port='0x13'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.4'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='5' port='0x14'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.5'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='6' port='0x15'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.6'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='7' port='0x16'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='8' port='0x17'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.8'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='9' port='0x18'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.9'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='10' port='0x19'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.10'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='11' port='0x1a'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.11'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='12' port='0x1b'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.12'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='13' port='0x1c'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.13'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='14' port='0x1d'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.14'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='15' port='0x1e'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.15'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='16' port='0x1f'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.16'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='17' port='0x20'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.17'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='18' port='0x21'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.18'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='19' port='0x22'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.19'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='20' port='0x23'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.20'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='21' port='0x24'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.21'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='22' port='0x25'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.22'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='23' port='0x26'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.23'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='24' port='0x27'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.24'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='25' port='0x28'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.25'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-pci-bridge'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.26'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='usb'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='sata' index='0'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='ide'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:99:a7:ce'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target dev='tapbf41c673-48'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='net0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:df:72:0f'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target dev='tap2ad41fbf-b7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='net1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <serial type='pty'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target type='isa-serial' port='0'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <model name='isa-serial'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </target>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <console type='pty' tty='/dev/pts/0'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target type='serial' port='0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <input type='tablet' bus='usb'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='input0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='usb' bus='0' port='1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <input type='mouse' bus='ps2'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='input1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <input type='keyboard' bus='ps2'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='input2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <listen type='address' address='::0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <audio id='1' type='none'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model type='virtio' heads='1' primary='yes'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='video0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <watchdog model='itco' action='reset'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='watchdog0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </watchdog>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <memballoon model='virtio'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <stats period='10'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='balloon0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <rng model='virtio'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <backend model='random'>/dev/urandom</backend>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='rng0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <label>system_u:system_r:svirt_t:s0:c516,c926</label>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c516,c926</imagelabel>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <label>+107:+107</label>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <imagelabel>+107:+107</imagelabel>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.735 230014 INFO nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully detached device tap2ad41fbf-b7 from instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 from the persistent domain config.#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.736 230014 DEBUG nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] (1/8): Attempting to detach device tap2ad41fbf-b7 with device alias net1 from instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.737 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] detach device xml: <interface type="ethernet">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <mac address="fa:16:3e:df:72:0f"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <model type="virtio"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <driver name="vhost" rx_queue_size="512"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <mtu size="1442"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <target dev="tap2ad41fbf-b7"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: </interface>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 24 04:59:59 np0005533252 kernel: tap2ad41fbf-b7 (unregistering): left promiscuous mode
Nov 24 04:59:59 np0005533252 NetworkManager[48870]: <info>  [1763978399.7959] device (tap2ad41fbf-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 04:59:59 np0005533252 ovn_controller[132966]: 2025-11-24T09:59:59Z|00080|binding|INFO|Releasing lport 2ad41fbf-b749-4394-9d14-483c127ff44c from this chassis (sb_readonly=0)
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.803 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:59 np0005533252 ovn_controller[132966]: 2025-11-24T09:59:59Z|00081|binding|INFO|Setting lport 2ad41fbf-b749-4394-9d14-483c127ff44c down in Southbound
Nov 24 04:59:59 np0005533252 ovn_controller[132966]: 2025-11-24T09:59:59Z|00082|binding|INFO|Removing iface tap2ad41fbf-b7 ovn-installed in OVS
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.805 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.814 230014 DEBUG nova.virt.libvirt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Received event <DeviceRemovedEvent: 1763978399.813985, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.817 230014 DEBUG nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Start waiting for the detach event from libvirt for device tap2ad41fbf-b7 with device alias net1 for instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.817 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.821 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.822 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <name>instance-00000006</name>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:58:35</nova:creationTime>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:port uuid="2ad41fbf-b749-4394-9d14-483c127ff44c">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <memory unit='KiB'>131072</memory>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <vcpu placement='static'>1</vcpu>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <resource>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <partition>/machine</partition>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </resource>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <sysinfo type='smbios'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <system>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='manufacturer'>RDO</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='product'>OpenStack Compute</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='serial'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='uuid'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <entry name='family'>Virtual Machine</entry>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </system>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <os>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <boot dev='hd'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <smbios mode='sysinfo'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </os>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <features>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <vmcoreinfo state='on'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </features>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <cpu mode='custom' match='exact' check='full'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <vendor>AMD</vendor>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='x2apic'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc-deadline'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='hypervisor'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc_adjust'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='spec-ctrl'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='stibp'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='ssbd'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='cmp_legacy'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='overflow-recov'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='succor'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='ibrs'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='amd-ssbd'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='virt-ssbd'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='lbrv'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='tsc-scale'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='vmcb-clean'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='flushbyasid'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pause-filter'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pfthreshold'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='xsaves'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svm'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='require' name='topoext'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='npt'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <feature policy='disable' name='nrip-save'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <clock offset='utc'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <timer name='pit' tickpolicy='delay'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <timer name='hpet' present='no'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </clock>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <on_poweroff>destroy</on_poweroff>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <on_reboot>restart</on_reboot>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <on_crash>destroy</on_crash>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <devices>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <disk type='network' device='disk'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk' index='2'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target dev='vda' bus='virtio'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='virtio-disk0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <disk type='network' device='cdrom'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </auth>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config' index='1'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </source>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target dev='sda' bus='sata'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <readonly/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='sata0-0-0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </disk>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='0' model='pcie-root'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pcie.0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='1' port='0x10'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='2' port='0x11'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='3' port='0x12'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.3'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='4' port='0x13'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.4'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='5' port='0x14'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.5'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='6' port='0x15'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.6'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='7' port='0x16'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='8' port='0x17'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.8'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='9' port='0x18'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.9'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='10' port='0x19'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.10'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='11' port='0x1a'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.11'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='12' port='0x1b'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.12'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='13' port='0x1c'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.13'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='14' port='0x1d'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.14'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='15' port='0x1e'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.15'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='16' port='0x1f'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.16'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='17' port='0x20'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.17'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='18' port='0x21'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.18'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='19' port='0x22'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.19'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='20' port='0x23'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.20'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='21' port='0x24'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.21'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='22' port='0x25'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.22'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='23' port='0x26'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.23'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='24' port='0x27'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.24'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target chassis='25' port='0x28'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.25'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model name='pcie-pci-bridge'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='pci.26'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='usb'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <controller type='sata' index='0'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='ide'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </controller>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:99:a7:ce'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target dev='tapbf41c673-48'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='net0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </interface>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <serial type='pty'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target type='isa-serial' port='0'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:        <model name='isa-serial'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      </target>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </serial>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <console type='pty' tty='/dev/pts/0'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <target type='serial' port='0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </console>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <input type='tablet' bus='usb'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='input0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='usb' bus='0' port='1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <input type='mouse' bus='ps2'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='input1'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <input type='keyboard' bus='ps2'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='input2'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </input>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <listen type='address' address='::0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <audio id='1' type='none'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <video>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <model type='virtio' heads='1' primary='yes'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='video0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </video>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <watchdog model='itco' action='reset'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='watchdog0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </watchdog>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <memballoon model='virtio'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <stats period='10'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='balloon0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <rng model='virtio'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <backend model='random'>/dev/urandom</backend>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <alias name='rng0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </rng>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </devices>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <label>system_u:system_r:svirt_t:s0:c516,c926</label>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c516,c926</imagelabel>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <label>+107:+107</label>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <imagelabel>+107:+107</imagelabel>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: </domain>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.822 230014 INFO nova.virt.libvirt.driver [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully detached device tap2ad41fbf-b7 from instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6 from the live domain config.
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.823 230014 DEBUG nova.virt.libvirt.vif [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.823 230014 DEBUG nova.network.os_vif_util [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.824 230014 DEBUG nova.network.os_vif_util [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.824 230014 DEBUG os_vif [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.827 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.827 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ad41fbf-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.828 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.830 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.831 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.833 230014 INFO os_vif [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7')#033[00m
Nov 24 04:59:59 np0005533252 nova_compute[230010]: 2025-11-24 09:59:59.834 230014 DEBUG nova.virt.libvirt.guest [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:59:59</nova:creationTime>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 04:59:59 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 04:59:59 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 04:59:59 np0005533252 nova_compute[230010]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 24 05:00:00 np0005533252 ceph-mon[80009]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Nov 24 05:00:00 np0005533252 ceph-mon[80009]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Nov 24 05:00:00 np0005533252 ceph-mon[80009]:    daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1 is in unknown state
Nov 24 05:00:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:00:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:00:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:00.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:01.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.025 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:72:0f 10.100.0.24', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '62465e3c-a372-4121-8a2e-5e10d1c3faf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58766ea9-d6bf-4e11-9e8a-1652f6f7c4d5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=2ad41fbf-b749-4394-9d14-483c127ff44c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.027 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad41fbf-b749-4394-9d14-483c127ff44c in datapath cbb18554-4df6-4004-8b94-6d2a9b50722d unbound from our chassis#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.028 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbb18554-4df6-4004-8b94-6d2a9b50722d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.033 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[80370e02-85d1-43b6-aa06-158cf1cc5b54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.035 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d namespace which is not needed anymore#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.040 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:01 np0005533252 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [NOTICE]   (240002) : haproxy version is 2.8.14-c23fe91
Nov 24 05:00:01 np0005533252 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [NOTICE]   (240002) : path to executable is /usr/sbin/haproxy
Nov 24 05:00:01 np0005533252 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [WARNING]  (240002) : Exiting Master process...
Nov 24 05:00:01 np0005533252 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [ALERT]    (240002) : Current worker (240004) exited with code 143 (Terminated)
Nov 24 05:00:01 np0005533252 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[239998]: [WARNING]  (240002) : All workers exited. Exiting... (0)
Nov 24 05:00:01 np0005533252 systemd[1]: libpod-bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b.scope: Deactivated successfully.
Nov 24 05:00:01 np0005533252 podman[240508]: 2025-11-24 10:00:01.178676951 +0000 UTC m=+0.049525312 container died bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 05:00:01 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b-userdata-shm.mount: Deactivated successfully.
Nov 24 05:00:01 np0005533252 systemd[1]: var-lib-containers-storage-overlay-447107754eda77794034edf91920a06d35d4d1b91593ad5057e2c61b459718a4-merged.mount: Deactivated successfully.
Nov 24 05:00:01 np0005533252 podman[240508]: 2025-11-24 10:00:01.237505869 +0000 UTC m=+0.108354200 container cleanup bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 05:00:01 np0005533252 systemd[1]: libpod-conmon-bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b.scope: Deactivated successfully.
Nov 24 05:00:01 np0005533252 podman[240537]: 2025-11-24 10:00:01.313365965 +0000 UTC m=+0.054347590 container remove bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.320 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e6888668-e64d-4177-87b8-b24216276ee2]: (4, ('Mon Nov 24 10:00:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d (bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b)\nbafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b\nMon Nov 24 10:00:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d (bafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b)\nbafa73c024b9dc95537a27e37980736f3d07e3968334d315c04d285dcb5be79b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.323 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[407e496f-f11d-4e2c-b60b-a2a2ea194c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.324 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbb18554-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:00:01 np0005533252 kernel: tapcbb18554-40: left promiscuous mode
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.326 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.337 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.340 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[485323da-2229-4641-858a-2f8eb1209990]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.363 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[97f3a8af-c7c3-4953-950f-b15712d75e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.364 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[dc36bdf8-6fab-4569-be22-709ca5d184e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.368 230014 DEBUG nova.compute.manager [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-unplugged-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.368 230014 DEBUG oslo_concurrency.lockutils [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.369 230014 DEBUG oslo_concurrency.lockutils [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.369 230014 DEBUG oslo_concurrency.lockutils [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.369 230014 DEBUG nova.compute.manager [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-unplugged-2ad41fbf-b749-4394-9d14-483c127ff44c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.370 230014 WARNING nova.compute.manager [req-c5d26cc7-b49e-42f4-b3cb-db63a71030de req-726ab591-07ad-42fa-ba14-629ee8e68a4a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-unplugged-2ad41fbf-b749-4394-9d14-483c127ff44c for instance with vm_state active and task_state None.#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.379 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[2f14dd6e-4873-4592-9c7f-219c9ac5df17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425925, 'reachable_time': 29991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240553, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.385 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 24 05:00:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:01.386 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[a6dc2720-0b84-42d5-8f05-99b4b94c8420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:01 np0005533252 systemd[1]: run-netns-ovnmeta\x2dcbb18554\x2d4df6\x2d4004\x2d8b94\x2d6d2a9b50722d.mount: Deactivated successfully.
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.503 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.503 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.503 230014 DEBUG nova.network.neutron [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.590 230014 DEBUG nova.compute.manager [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-deleted-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.590 230014 INFO nova.compute.manager [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Neutron deleted interface 2ad41fbf-b749-4394-9d14-483c127ff44c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.590 230014 DEBUG nova.network.neutron [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.612 230014 DEBUG nova.objects.instance [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lazy-loading 'system_metadata' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.633 230014 DEBUG nova.objects.instance [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lazy-loading 'flavor' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.653 230014 DEBUG nova.virt.libvirt.vif [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.654 230014 DEBUG nova.network.os_vif_util [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.654 230014 DEBUG nova.network.os_vif_util [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.660 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.666 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <name>instance-00000006</name>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:59:59</nova:creationTime>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 05:00:01 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <memory unit='KiB'>131072</memory>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <vcpu placement='static'>1</vcpu>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <resource>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <partition>/machine</partition>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </resource>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <sysinfo type='smbios'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <system>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='manufacturer'>RDO</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='product'>OpenStack Compute</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='serial'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='uuid'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='family'>Virtual Machine</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </system>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <os>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <boot dev='hd'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <smbios mode='sysinfo'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </os>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <features>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <vmcoreinfo state='on'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </features>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <cpu mode='custom' match='exact' check='full'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <vendor>AMD</vendor>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='x2apic'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc-deadline'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='hypervisor'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc_adjust'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='spec-ctrl'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='stibp'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='ssbd'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='cmp_legacy'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='overflow-recov'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='succor'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='ibrs'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='amd-ssbd'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='virt-ssbd'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='lbrv'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='tsc-scale'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='vmcb-clean'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='flushbyasid'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pause-filter'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pfthreshold'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='xsaves'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svm'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='topoext'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='npt'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='nrip-save'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <clock offset='utc'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <timer name='pit' tickpolicy='delay'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <timer name='hpet' present='no'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </clock>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <on_poweroff>destroy</on_poweroff>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <on_reboot>restart</on_reboot>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <on_crash>destroy</on_crash>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <devices>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <disk type='network' device='disk'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </auth>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk' index='2'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </source>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target dev='vda' bus='virtio'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='virtio-disk0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </disk>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <disk type='network' device='cdrom'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </auth>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config' index='1'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </source>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target dev='sda' bus='sata'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <readonly/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='sata0-0-0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </disk>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='0' model='pcie-root'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pcie.0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='1' port='0x10'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='2' port='0x11'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='3' port='0x12'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.3'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='4' port='0x13'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.4'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='5' port='0x14'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.5'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='6' port='0x15'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.6'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='7' port='0x16'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.7'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='8' port='0x17'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.8'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='9' port='0x18'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.9'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='10' port='0x19'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.10'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='11' port='0x1a'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.11'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='12' port='0x1b'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.12'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='13' port='0x1c'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.13'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='14' port='0x1d'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.14'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='15' port='0x1e'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.15'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='16' port='0x1f'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.16'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='17' port='0x20'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.17'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='18' port='0x21'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.18'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='19' port='0x22'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.19'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='20' port='0x23'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.20'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='21' port='0x24'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.21'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='22' port='0x25'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.22'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='23' port='0x26'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.23'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='24' port='0x27'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.24'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='25' port='0x28'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.25'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-pci-bridge'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.26'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='usb'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='sata' index='0'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='ide'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:99:a7:ce'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target dev='tapbf41c673-48'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='net0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </interface>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <serial type='pty'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target type='isa-serial' port='0'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <model name='isa-serial'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </target>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </serial>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <console type='pty' tty='/dev/pts/0'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target type='serial' port='0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </console>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <input type='tablet' bus='usb'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='input0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='usb' bus='0' port='1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </input>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <input type='mouse' bus='ps2'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='input1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </input>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <input type='keyboard' bus='ps2'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='input2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </input>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <listen type='address' address='::0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <audio id='1' type='none'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <video>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model type='virtio' heads='1' primary='yes'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='video0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </video>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <watchdog model='itco' action='reset'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='watchdog0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </watchdog>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <memballoon model='virtio'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <stats period='10'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='balloon0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <rng model='virtio'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <backend model='random'>/dev/urandom</backend>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='rng0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </rng>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </devices>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <label>system_u:system_r:svirt_t:s0:c516,c926</label>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c516,c926</imagelabel>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <label>+107:+107</label>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <imagelabel>+107:+107</imagelabel>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 05:00:01 np0005533252 nova_compute[230010]: </domain>
Nov 24 05:00:01 np0005533252 nova_compute[230010]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.667 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.674 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:df:72:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2ad41fbf-b7"/></interface> not found in domain: <domain type='kvm' id='4'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <name>instance-00000006</name>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <uuid>62465e3c-a372-4121-8a2e-5e10d1c3faf6</uuid>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 09:59:59</nova:creationTime>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 05:00:01 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <memory unit='KiB'>131072</memory>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <vcpu placement='static'>1</vcpu>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <resource>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <partition>/machine</partition>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </resource>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <sysinfo type='smbios'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <system>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='manufacturer'>RDO</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='product'>OpenStack Compute</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='serial'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='uuid'>62465e3c-a372-4121-8a2e-5e10d1c3faf6</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <entry name='family'>Virtual Machine</entry>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </system>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <os>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <boot dev='hd'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <smbios mode='sysinfo'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </os>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <features>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <vmcoreinfo state='on'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </features>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <cpu mode='custom' match='exact' check='full'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <vendor>AMD</vendor>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='x2apic'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc-deadline'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='hypervisor'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='tsc_adjust'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='spec-ctrl'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='stibp'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='ssbd'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='cmp_legacy'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='overflow-recov'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='succor'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='ibrs'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='amd-ssbd'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='virt-ssbd'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='lbrv'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='tsc-scale'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='vmcb-clean'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='flushbyasid'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pause-filter'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='pfthreshold'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='xsaves'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='svm'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='require' name='topoext'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='npt'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <feature policy='disable' name='nrip-save'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <clock offset='utc'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <timer name='pit' tickpolicy='delay'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <timer name='hpet' present='no'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </clock>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <on_poweroff>destroy</on_poweroff>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <on_reboot>restart</on_reboot>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <on_crash>destroy</on_crash>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <devices>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <disk type='network' device='disk'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </auth>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk' index='2'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </source>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target dev='vda' bus='virtio'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='virtio-disk0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </disk>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <disk type='network' device='cdrom'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <driver name='qemu' type='raw' cache='none'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <auth username='openstack'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <secret type='ceph' uuid='84a084c3-61a7-5de7-8207-1f88efa59a64'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </auth>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <source protocol='rbd' name='vms/62465e3c-a372-4121-8a2e-5e10d1c3faf6_disk.config' index='1'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.100' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.102' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <host name='192.168.122.101' port='6789'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </source>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target dev='sda' bus='sata'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <readonly/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='sata0-0-0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </disk>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='0' model='pcie-root'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pcie.0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='1' port='0x10'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='2' port='0x11'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='3' port='0x12'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.3'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='4' port='0x13'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.4'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='5' port='0x14'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.5'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='6' port='0x15'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.6'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='7' port='0x16'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.7'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='8' port='0x17'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.8'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='9' port='0x18'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.9'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='10' port='0x19'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.10'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='11' port='0x1a'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.11'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='12' port='0x1b'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.12'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='13' port='0x1c'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.13'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='14' port='0x1d'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.14'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='15' port='0x1e'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.15'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='16' port='0x1f'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.16'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='17' port='0x20'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.17'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='18' port='0x21'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.18'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='19' port='0x22'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.19'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='20' port='0x23'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.20'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='21' port='0x24'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.21'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='22' port='0x25'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.22'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='23' port='0x26'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.23'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='24' port='0x27'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.24'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-root-port'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target chassis='25' port='0x28'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.25'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model name='pcie-pci-bridge'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='pci.26'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='usb'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <controller type='sata' index='0'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='ide'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </controller>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <interface type='ethernet'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <mac address='fa:16:3e:99:a7:ce'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target dev='tapbf41c673-48'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model type='virtio'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <driver name='vhost' rx_queue_size='512'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <mtu size='1442'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='net0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </interface>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <serial type='pty'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target type='isa-serial' port='0'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:        <model name='isa-serial'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      </target>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </serial>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <console type='pty' tty='/dev/pts/0'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <source path='/dev/pts/0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <log file='/var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6/console.log' append='off'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <target type='serial' port='0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='serial0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </console>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <input type='tablet' bus='usb'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='input0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='usb' bus='0' port='1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </input>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <input type='mouse' bus='ps2'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='input1'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </input>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <input type='keyboard' bus='ps2'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='input2'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </input>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <listen type='address' address='::0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </graphics>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <audio id='1' type='none'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <video>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <model type='virtio' heads='1' primary='yes'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='video0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </video>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <watchdog model='itco' action='reset'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='watchdog0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </watchdog>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <memballoon model='virtio'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <stats period='10'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='balloon0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <rng model='virtio'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <backend model='random'>/dev/urandom</backend>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <alias name='rng0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </rng>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </devices>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <label>system_u:system_r:svirt_t:s0:c516,c926</label>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c516,c926</imagelabel>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <label>+107:+107</label>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <imagelabel>+107:+107</imagelabel>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </seclabel>
Nov 24 05:00:01 np0005533252 nova_compute[230010]: </domain>
Nov 24 05:00:01 np0005533252 nova_compute[230010]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.675 230014 WARNING nova.virt.libvirt.driver [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Detaching interface fa:16:3e:df:72:0f failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap2ad41fbf-b7' not found.
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.675 230014 DEBUG nova.virt.libvirt.vif [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.676 230014 DEBUG nova.network.os_vif_util [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converting VIF {"id": "2ad41fbf-b749-4394-9d14-483c127ff44c", "address": "fa:16:3e:df:72:0f", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad41fbf-b7", "ovs_interfaceid": "2ad41fbf-b749-4394-9d14-483c127ff44c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.676 230014 DEBUG nova.network.os_vif_util [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.676 230014 DEBUG os_vif [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.678 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.678 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ad41fbf-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.678 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.681 230014 INFO os_vif [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:72:0f,bridge_name='br-int',has_traffic_filtering=True,id=2ad41fbf-b749-4394-9d14-483c127ff44c,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad41fbf-b7')
Nov 24 05:00:01 np0005533252 nova_compute[230010]: 2025-11-24 10:00:01.681 230014 DEBUG nova.virt.libvirt.guest [req-114dc47e-ec53-4792-86d7-911b83a61110 req-9b1cf8bf-e81d-4a36-9b80-4f948437a1df 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:name>tempest-TestNetworkBasicOps-server-1468987490</nova:name>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:creationTime>2025-11-24 10:00:01</nova:creationTime>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:flavor name="m1.nano">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:memory>128</nova:memory>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:disk>1</nova:disk>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:swap>0</nova:swap>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:ephemeral>0</nova:ephemeral>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:vcpus>1</nova:vcpus>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:flavor>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:owner>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:owner>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  <nova:ports>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    <nova:port uuid="bf41c673-482b-42e3-ac98-475b716fa0e9">
Nov 24 05:00:01 np0005533252 nova_compute[230010]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:    </nova:port>
Nov 24 05:00:01 np0005533252 nova_compute[230010]:  </nova:ports>
Nov 24 05:00:01 np0005533252 nova_compute[230010]: </nova:instance>
Nov 24 05:00:01 np0005533252 nova_compute[230010]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 24 05:00:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:02.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:02 np0005533252 nova_compute[230010]: 2025-11-24 10:00:02.748 230014 INFO nova.network.neutron [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Port 2ad41fbf-b749-4394-9d14-483c127ff44c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 24 05:00:02 np0005533252 nova_compute[230010]: 2025-11-24 10:00:02.749 230014 DEBUG nova.network.neutron [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [{"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 05:00:02 np0005533252 nova_compute[230010]: 2025-11-24 10:00:02.763 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 05:00:02 np0005533252 nova_compute[230010]: 2025-11-24 10:00:02.790 230014 DEBUG oslo_concurrency.lockutils [None req-f3ac20a1-7545-43d3-b997-341d1779d8bb 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "interface-62465e3c-a372-4121-8a2e-5e10d1c3faf6-2ad41fbf-b749-4394-9d14-483c127ff44c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:00:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:03.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:03 np0005533252 ovn_controller[132966]: 2025-11-24T10:00:03Z|00083|binding|INFO|Releasing lport 51ab5aa5-77bf-4bb7-993e-d15c7b4540ff from this chassis (sb_readonly=0)
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.077 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.449 230014 DEBUG nova.compute.manager [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.450 230014 DEBUG oslo_concurrency.lockutils [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.450 230014 DEBUG oslo_concurrency.lockutils [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.450 230014 DEBUG oslo_concurrency.lockutils [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.450 230014 DEBUG nova.compute.manager [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.451 230014 WARNING nova.compute.manager [req-1dddba55-493e-484f-bce7-4329c06e4e37 req-880dddfd-0e79-4384-bf68-ae080f78b2fc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-2ad41fbf-b749-4394-9d14-483c127ff44c for instance with vm_state active and task_state None.
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.890 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.891 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.891 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.891 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.891 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.893 230014 INFO nova.compute.manager [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Terminating instance
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.894 230014 DEBUG nova.compute.manager [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 05:00:03 np0005533252 kernel: tapbf41c673-48 (unregistering): left promiscuous mode
Nov 24 05:00:03 np0005533252 NetworkManager[48870]: <info>  [1763978403.9430] device (tapbf41c673-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 05:00:03 np0005533252 ovn_controller[132966]: 2025-11-24T10:00:03Z|00084|binding|INFO|Releasing lport bf41c673-482b-42e3-ac98-475b716fa0e9 from this chassis (sb_readonly=0)
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.954 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:03 np0005533252 ovn_controller[132966]: 2025-11-24T10:00:03Z|00085|binding|INFO|Setting lport bf41c673-482b-42e3-ac98-475b716fa0e9 down in Southbound
Nov 24 05:00:03 np0005533252 ovn_controller[132966]: 2025-11-24T10:00:03Z|00086|binding|INFO|Removing iface tapbf41c673-48 ovn-installed in OVS
Nov 24 05:00:03 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.961 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:a7:ce 10.100.0.8'], port_security=['fa:16:3e:99:a7:ce 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '62465e3c-a372-4121-8a2e-5e10d1c3faf6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81f18750-9169-4587-b6ca-88a2bbc58afc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ebde3e26-b896-444f-b8ef-f2f39010ba47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42f28b30-955e-4ea5-b415-d62763a6e220, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=bf41c673-482b-42e3-ac98-475b716fa0e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 05:00:03 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.964 142336 INFO neutron.agent.ovn.metadata.agent [-] Port bf41c673-482b-42e3-ac98-475b716fa0e9 in datapath 81f18750-9169-4587-b6ca-88a2bbc58afc unbound from our chassis
Nov 24 05:00:03 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.965 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81f18750-9169-4587-b6ca-88a2bbc58afc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 05:00:03 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.966 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f18980-f091-424f-92a3-cfa7bc900d66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 05:00:03 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:03.966 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc namespace which is not needed anymore
Nov 24 05:00:03 np0005533252 nova_compute[230010]: 2025-11-24 10:00:03.978 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:04 np0005533252 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 24 05:00:04 np0005533252 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 19.209s CPU time.
Nov 24 05:00:04 np0005533252 systemd-machined[193537]: Machine qemu-4-instance-00000006 terminated.
Nov 24 05:00:04 np0005533252 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [NOTICE]   (239552) : haproxy version is 2.8.14-c23fe91
Nov 24 05:00:04 np0005533252 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [NOTICE]   (239552) : path to executable is /usr/sbin/haproxy
Nov 24 05:00:04 np0005533252 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [WARNING]  (239552) : Exiting Master process...
Nov 24 05:00:04 np0005533252 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [ALERT]    (239552) : Current worker (239554) exited with code 143 (Terminated)
Nov 24 05:00:04 np0005533252 neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc[239548]: [WARNING]  (239552) : All workers exited. Exiting... (0)
Nov 24 05:00:04 np0005533252 systemd[1]: libpod-312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e.scope: Deactivated successfully.
Nov 24 05:00:04 np0005533252 podman[240581]: 2025-11-24 10:00:04.111299157 +0000 UTC m=+0.049614314 container died 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.141 230014 INFO nova.virt.libvirt.driver [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance destroyed successfully.#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.142 230014 DEBUG nova.objects.instance [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 62465e3c-a372-4121-8a2e-5e10d1c3faf6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:00:04 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e-userdata-shm.mount: Deactivated successfully.
Nov 24 05:00:04 np0005533252 systemd[1]: var-lib-containers-storage-overlay-8077583c93206f4b50fb98a5f2ccb3fea2a970b30dff429250e8ff4a1f0a34dc-merged.mount: Deactivated successfully.
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.155 230014 DEBUG nova.virt.libvirt.vif [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:57:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1468987490',display_name='tempest-TestNetworkBasicOps-server-1468987490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1468987490',id=6,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeLeNtgMDECCA396nl5/z6TsnAPH3kX9ECWzaWuLvptXvMaJaj/WlHKUFyFRR30PurvGrDvNN2g1Ij1pTu0Su2H0Am0Z6Y5TdOjAAQXOQr2HISwvDDFzD9t0aaelZEbhw==',key_name='tempest-TestNetworkBasicOps-1307688110',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-sy1yuug7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:05Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=62465e3c-a372-4121-8a2e-5e10d1c3faf6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.155 230014 DEBUG nova.network.os_vif_util [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "bf41c673-482b-42e3-ac98-475b716fa0e9", "address": "fa:16:3e:99:a7:ce", "network": {"id": "81f18750-9169-4587-b6ca-88a2bbc58afc", "bridge": "br-int", "label": "tempest-network-smoke--1543163911", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf41c673-48", "ovs_interfaceid": "bf41c673-482b-42e3-ac98-475b716fa0e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.156 230014 DEBUG nova.network.os_vif_util [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.156 230014 DEBUG os_vif [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 05:00:04 np0005533252 podman[240581]: 2025-11-24 10:00:04.157607639 +0000 UTC m=+0.095922776 container cleanup 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.160 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.161 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf41c673-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.163 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.165 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.168 230014 INFO os_vif [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:a7:ce,bridge_name='br-int',has_traffic_filtering=True,id=bf41c673-482b-42e3-ac98-475b716fa0e9,network=Network(81f18750-9169-4587-b6ca-88a2bbc58afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf41c673-48')#033[00m
Nov 24 05:00:04 np0005533252 systemd[1]: libpod-conmon-312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e.scope: Deactivated successfully.
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.192 230014 DEBUG nova.compute.manager [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-unplugged-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.193 230014 DEBUG oslo_concurrency.lockutils [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.194 230014 DEBUG oslo_concurrency.lockutils [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.194 230014 DEBUG oslo_concurrency.lockutils [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.194 230014 DEBUG nova.compute.manager [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-unplugged-bf41c673-482b-42e3-ac98-475b716fa0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.194 230014 DEBUG nova.compute.manager [req-7cc2c4f1-8d87-4f88-847e-e6ba03235c04 req-bb25cb6f-76e2-403d-93af-c7abbaa1b434 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-unplugged-bf41c673-482b-42e3-ac98-475b716fa0e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 24 05:00:04 np0005533252 podman[240622]: 2025-11-24 10:00:04.22352935 +0000 UTC m=+0.040809088 container remove 312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.230 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9df17bde-0d51-4f81-93ca-ac873317f292]: (4, ('Mon Nov 24 10:00:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc (312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e)\n312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e\nMon Nov 24 10:00:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc (312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e)\n312d1c6778ee57afc3309ab922725b04a981e2b6ccef78fd9ebe4f51f074714e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.233 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[90aea0bd-8267-4677-9c32-f3e33ea3e770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.234 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81f18750-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:00:04 np0005533252 kernel: tap81f18750-90: left promiscuous mode
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.238 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.252 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.255 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e4894c0e-5464-4cb1-870c-6c6940c1c09b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.268 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[eca7c9f2-e966-413e-aba7-dff06c737b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.270 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebf6150-3d24-4f4d-a8fc-4b276ba0b2e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.284 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1682351f-6e37-456e-8572-1974ee348156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422808, 'reachable_time': 38724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240653, 'error': None, 'target': 'ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.286 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81f18750-9169-4587-b6ca-88a2bbc58afc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.286 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[b740ddbc-c469-41fa-a77d-f0b0454ff82a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:00:04 np0005533252 systemd[1]: run-netns-ovnmeta\x2d81f18750\x2d9169\x2d4587\x2db6ca\x2d88a2bbc58afc.mount: Deactivated successfully.
Nov 24 05:00:04 np0005533252 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 05:00:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:00:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:04.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:00:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:04 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:04.741 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.777 230014 INFO nova.virt.libvirt.driver [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Deleting instance files /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6_del#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.778 230014 INFO nova.virt.libvirt.driver [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Deletion of /var/lib/nova/instances/62465e3c-a372-4121-8a2e-5e10d1c3faf6_del complete#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.829 230014 INFO nova.compute.manager [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.830 230014 DEBUG oslo.service.loopingcall [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.830 230014 DEBUG nova.compute.manager [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 24 05:00:04 np0005533252 nova_compute[230010]: 2025-11-24 10:00:04.830 230014 DEBUG nova.network.neutron [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 24 05:00:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:05.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.456 230014 DEBUG nova.network.neutron [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.474 230014 INFO nova.compute.manager [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Took 0.64 seconds to deallocate network for instance.#033[00m
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.540 230014 DEBUG nova.compute.manager [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.541 230014 DEBUG nova.compute.manager [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing instance network info cache due to event network-changed-bf41c673-482b-42e3-ac98-475b716fa0e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.541 230014 DEBUG oslo_concurrency.lockutils [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.541 230014 DEBUG oslo_concurrency.lockutils [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.541 230014 DEBUG nova.network.neutron [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Refreshing network info cache for port bf41c673-482b-42e3-ac98-475b716fa0e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.544 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.544 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.593 230014 DEBUG oslo_concurrency.processutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 05:00:05 np0005533252 nova_compute[230010]: 2025-11-24 10:00:05.714 230014 DEBUG nova.network.neutron [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 05:00:06 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:00:06 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1218318765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.041 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.053 230014 DEBUG oslo_concurrency.processutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.061 230014 DEBUG nova.compute.provider_tree [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.068 230014 DEBUG nova.network.neutron [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.080 230014 DEBUG nova.scheduler.client.report [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.087 230014 DEBUG oslo_concurrency.lockutils [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-62465e3c-a372-4121-8a2e-5e10d1c3faf6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.088 230014 DEBUG nova.compute.manager [req-c2daf1ec-e795-4eac-8a3d-8f25bb3776b5 req-4c598591-94dc-49a9-ac5a-f3764a657bdd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-deleted-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.100 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.123 230014 INFO nova.scheduler.client.report [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 62465e3c-a372-4121-8a2e-5e10d1c3faf6
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.185 230014 DEBUG oslo_concurrency.lockutils [None req-2ac931c1-7027-4bc7-9b19-c8714b315e43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.263 230014 DEBUG nova.compute.manager [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.264 230014 DEBUG oslo_concurrency.lockutils [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.264 230014 DEBUG oslo_concurrency.lockutils [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.264 230014 DEBUG oslo_concurrency.lockutils [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "62465e3c-a372-4121-8a2e-5e10d1c3faf6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.265 230014 DEBUG nova.compute.manager [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] No waiting events found dispatching network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 05:00:06 np0005533252 nova_compute[230010]: 2025-11-24 10:00:06.265 230014 WARNING nova.compute.manager [req-083be48e-6895-47c2-a8c7-53c909a14133 req-896a1dd0-a19f-46f6-9af1-aa01d5c832a8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Received unexpected event network-vif-plugged-bf41c673-482b-42e3-ac98-475b716fa0e9 for instance with vm_state deleted and task_state None.
Nov 24 05:00:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:00:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:06.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:00:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:07.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:07 np0005533252 podman[240703]: 2025-11-24 10:00:07.669823715 +0000 UTC m=+0.073907819 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 05:00:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:08.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:08 np0005533252 nova_compute[230010]: 2025-11-24 10:00:08.951 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:08 np0005533252 nova_compute[230010]: 2025-11-24 10:00:08.992 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:09.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:09 np0005533252 nova_compute[230010]: 2025-11-24 10:00:09.163 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:10.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:11 np0005533252 nova_compute[230010]: 2025-11-24 10:00:11.044 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:12.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:13 np0005533252 podman[240728]: 2025-11-24 10:00:13.364068964 +0000 UTC m=+0.102078125 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 05:00:14 np0005533252 nova_compute[230010]: 2025-11-24 10:00:14.166 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:00:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:14.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:00:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:14 np0005533252 nova_compute[230010]: 2025-11-24 10:00:14.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:00:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:15.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:00:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:00:16 np0005533252 nova_compute[230010]: 2025-11-24 10:00:16.046 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:00:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:16.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.615767) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416615861, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1152, "num_deletes": 502, "total_data_size": 1864338, "memory_usage": 1900464, "flush_reason": "Manual Compaction"}
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416623446, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 916203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30138, "largest_seqno": 31285, "table_properties": {"data_size": 911957, "index_size": 1386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13933, "raw_average_key_size": 19, "raw_value_size": 901005, "raw_average_value_size": 1261, "num_data_blocks": 60, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978354, "oldest_key_time": 1763978354, "file_creation_time": 1763978416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 7702 microseconds, and 3340 cpu microseconds.
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.623486) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 916203 bytes OK
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.623506) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625062) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625075) EVENT_LOG_v1 {"time_micros": 1763978416625071, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625093) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1857716, prev total WAL file size 1857716, number of live WAL files 2.
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625893) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(894KB)], [57(16MB)]
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416625953, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18037846, "oldest_snapshot_seqno": -1}
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5796 keys, 12163508 bytes, temperature: kUnknown
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416687938, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 12163508, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12126898, "index_size": 21012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 149986, "raw_average_key_size": 25, "raw_value_size": 12024311, "raw_average_value_size": 2074, "num_data_blocks": 839, "num_entries": 5796, "num_filter_entries": 5796, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.688471) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12163508 bytes
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.692674) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 290.3 rd, 195.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.3 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(33.0) write-amplify(13.3) OK, records in: 6796, records dropped: 1000 output_compression: NoCompression
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.692715) EVENT_LOG_v1 {"time_micros": 1763978416692697, "job": 34, "event": "compaction_finished", "compaction_time_micros": 62139, "compaction_time_cpu_micros": 27692, "output_level": 6, "num_output_files": 1, "total_output_size": 12163508, "num_input_records": 6796, "num_output_records": 5796, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416693279, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416700249, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.625775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:00:16 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:00:16.700439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:00:16 np0005533252 nova_compute[230010]: 2025-11-24 10:00:16.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:00:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:17.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:17 np0005533252 nova_compute[230010]: 2025-11-24 10:00:17.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:00:17 np0005533252 nova_compute[230010]: 2025-11-24 10:00:17.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 05:00:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:19.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:19 np0005533252 nova_compute[230010]: 2025-11-24 10:00:19.139 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978404.132737, 62465e3c-a372-4121-8a2e-5e10d1c3faf6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 05:00:19 np0005533252 nova_compute[230010]: 2025-11-24 10:00:19.141 230014 INFO nova.compute.manager [-] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] VM Stopped (Lifecycle Event)#033[00m
Nov 24 05:00:19 np0005533252 nova_compute[230010]: 2025-11-24 10:00:19.169 230014 DEBUG nova.compute.manager [None req-82e2906b-b0c5-411a-b6d4-35d47615bbb1 - - - - - -] [instance: 62465e3c-a372-4121-8a2e-5e10d1c3faf6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 05:00:19 np0005533252 nova_compute[230010]: 2025-11-24 10:00:19.170 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:19 np0005533252 nova_compute[230010]: 2025-11-24 10:00:19.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:00:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:00:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:00:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:20.062 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:00:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:21.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:21 np0005533252 nova_compute[230010]: 2025-11-24 10:00:21.049 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:21 np0005533252 podman[240759]: 2025-11-24 10:00:21.324958178 +0000 UTC m=+0.066627779 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 05:00:21 np0005533252 nova_compute[230010]: 2025-11-24 10:00:21.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:00:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:22.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:22 np0005533252 nova_compute[230010]: 2025-11-24 10:00:22.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:00:22 np0005533252 nova_compute[230010]: 2025-11-24 10:00:22.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:00:22 np0005533252 nova_compute[230010]: 2025-11-24 10:00:22.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:00:22 np0005533252 nova_compute[230010]: 2025-11-24 10:00:22.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:00:22 np0005533252 nova_compute[230010]: 2025-11-24 10:00:22.783 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:00:22 np0005533252 nova_compute[230010]: 2025-11-24 10:00:22.783 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:00:22 np0005533252 nova_compute[230010]: 2025-11-24 10:00:22.784 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:00:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:23.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:00:23 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2206353367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.237 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.410 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.412 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4975MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.413 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.413 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.469 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.469 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.488 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:00:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:00:23 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1856495997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.959 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.966 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.978 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.996 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:00:23 np0005533252 nova_compute[230010]: 2025-11-24 10:00:23.997 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:00:24 np0005533252 nova_compute[230010]: 2025-11-24 10:00:24.175 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:24.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:24 np0005533252 nova_compute[230010]: 2025-11-24 10:00:24.992 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:00:24 np0005533252 nova_compute[230010]: 2025-11-24 10:00:24.992 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:00:24 np0005533252 nova_compute[230010]: 2025-11-24 10:00:24.993 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:00:24 np0005533252 nova_compute[230010]: 2025-11-24 10:00:24.993 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:00:25 np0005533252 nova_compute[230010]: 2025-11-24 10:00:25.004 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:00:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:26 np0005533252 nova_compute[230010]: 2025-11-24 10:00:26.052 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:26.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:29 np0005533252 nova_compute[230010]: 2025-11-24 10:00:29.178 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:00:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:00:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:30.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:00:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 05:00:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:00:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 05:00:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 05:00:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 05:00:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:31 np0005533252 nova_compute[230010]: 2025-11-24 10:00:31.053 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:31 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:00:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:32.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:34 np0005533252 nova_compute[230010]: 2025-11-24 10:00:34.181 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:34.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:00:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:00:36 np0005533252 nova_compute[230010]: 2025-11-24 10:00:36.056 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:36 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:36 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:00:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:36.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:37.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:38 np0005533252 podman[241031]: 2025-11-24 10:00:38.355072613 +0000 UTC m=+0.087553271 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 05:00:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:00:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:38.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:00:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:39 np0005533252 nova_compute[230010]: 2025-11-24 10:00:39.182 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:40.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:41 np0005533252 nova_compute[230010]: 2025-11-24 10:00:41.058 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:42.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:44 np0005533252 nova_compute[230010]: 2025-11-24 10:00:44.185 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:44 np0005533252 podman[241055]: 2025-11-24 10:00:44.3736038 +0000 UTC m=+0.101476462 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 05:00:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:44.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:00:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:00:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:00:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:00:46 np0005533252 nova_compute[230010]: 2025-11-24 10:00:46.061 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:46.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:48.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:49.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:49 np0005533252 nova_compute[230010]: 2025-11-24 10:00:49.186 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:50 np0005533252 ovn_controller[132966]: 2025-11-24T10:00:50Z|00087|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 24 05:00:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:50.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:51 np0005533252 nova_compute[230010]: 2025-11-24 10:00:51.063 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:52 np0005533252 podman[241110]: 2025-11-24 10:00:52.32737537 +0000 UTC m=+0.061516495 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 05:00:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:54 np0005533252 nova_compute[230010]: 2025-11-24 10:00:54.188 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:54.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:56 np0005533252 nova_compute[230010]: 2025-11-24 10:00:56.064 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:00:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:00:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:00:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:00:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:59.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:00:59 np0005533252 nova_compute[230010]: 2025-11-24 10:00:59.189 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:00:59 np0005533252 nova_compute[230010]: 2025-11-24 10:00:59.886 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:00:59 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:59.886 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:00:59 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:59.887 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 05:00:59 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:00:59.888 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:01:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:01:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:01:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:01 np0005533252 nova_compute[230010]: 2025-11-24 10:01:01.066 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:01.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:03.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:04 np0005533252 nova_compute[230010]: 2025-11-24 10:01:04.193 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:04.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:05.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:06 np0005533252 nova_compute[230010]: 2025-11-24 10:01:06.067 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:06.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:08.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:09 np0005533252 nova_compute[230010]: 2025-11-24 10:01:09.252 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:09 np0005533252 podman[241176]: 2025-11-24 10:01:09.36836112 +0000 UTC m=+0.081360813 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 05:01:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:10.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:11 np0005533252 nova_compute[230010]: 2025-11-24 10:01:11.069 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:11.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:12.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:13.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:14 np0005533252 nova_compute[230010]: 2025-11-24 10:01:14.253 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:14.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:15 np0005533252 podman[241202]: 2025-11-24 10:01:15.339137704 +0000 UTC m=+0.078319219 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 05:01:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:01:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:01:15 np0005533252 nova_compute[230010]: 2025-11-24 10:01:15.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:16 np0005533252 nova_compute[230010]: 2025-11-24 10:01:16.093 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:16.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:16 np0005533252 nova_compute[230010]: 2025-11-24 10:01:16.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:19.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:19 np0005533252 nova_compute[230010]: 2025-11-24 10:01:19.256 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:19 np0005533252 nova_compute[230010]: 2025-11-24 10:01:19.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:19 np0005533252 nova_compute[230010]: 2025-11-24 10:01:19.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:01:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:01:20.063 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:01:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:01:20.064 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:01:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:01:20.064 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:01:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:21.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:21 np0005533252 nova_compute[230010]: 2025-11-24 10:01:21.141 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:21 np0005533252 nova_compute[230010]: 2025-11-24 10:01:21.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:22 np0005533252 nova_compute[230010]: 2025-11-24 10:01:22.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:22 np0005533252 nova_compute[230010]: 2025-11-24 10:01:22.786 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:01:22 np0005533252 nova_compute[230010]: 2025-11-24 10:01:22.786 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:01:22 np0005533252 nova_compute[230010]: 2025-11-24 10:01:22.786 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:01:22 np0005533252 nova_compute[230010]: 2025-11-24 10:01:22.786 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:01:22 np0005533252 nova_compute[230010]: 2025-11-24 10:01:22.787 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:01:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:23.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:01:23 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1862029681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.215 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:01:23 np0005533252 podman[241256]: 2025-11-24 10:01:23.333423945 +0000 UTC m=+0.078976065 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.384 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.385 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4960MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.385 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.385 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.436 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.436 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.451 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:01:23 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:01:23 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/518376627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.909 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.918 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.951 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.953 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:01:23 np0005533252 nova_compute[230010]: 2025-11-24 10:01:23.953 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.257 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.954 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.954 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.970 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.970 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.970 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.984 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.984 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:24 np0005533252 nova_compute[230010]: 2025-11-24 10:01:24.984 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:01:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:25.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:26 np0005533252 nova_compute[230010]: 2025-11-24 10:01:26.143 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:26.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:27.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:29.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:29 np0005533252 nova_compute[230010]: 2025-11-24 10:01:29.260 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:01:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:01:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:30.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:31 np0005533252 nova_compute[230010]: 2025-11-24 10:01:31.145 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:31.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:34 np0005533252 nova_compute[230010]: 2025-11-24 10:01:34.261 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:35.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:36 np0005533252 nova_compute[230010]: 2025-11-24 10:01:36.146 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:36 np0005533252 podman[241455]: 2025-11-24 10:01:36.729149787 +0000 UTC m=+0.057255933 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 05:01:36 np0005533252 podman[241455]: 2025-11-24 10:01:36.827753091 +0000 UTC m=+0.155859237 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 05:01:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:37 np0005533252 podman[241596]: 2025-11-24 10:01:37.307736074 +0000 UTC m=+0.050622480 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 05:01:37 np0005533252 podman[241596]: 2025-11-24 10:01:37.318667562 +0000 UTC m=+0.061553948 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 05:01:37 np0005533252 podman[241715]: 2025-11-24 10:01:37.696877943 +0000 UTC m=+0.062496412 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 05:01:37 np0005533252 podman[241737]: 2025-11-24 10:01:37.763546796 +0000 UTC m=+0.050900248 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 05:01:37 np0005533252 podman[241715]: 2025-11-24 10:01:37.776759469 +0000 UTC m=+0.142377958 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 05:01:38 np0005533252 podman[241782]: 2025-11-24 10:01:38.00302162 +0000 UTC m=+0.065954286 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793)
Nov 24 05:01:38 np0005533252 podman[241782]: 2025-11-24 10:01:38.022932937 +0000 UTC m=+0.085865623 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Nov 24 05:01:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 05:01:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 05:01:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:38.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:38 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:01:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:39 np0005533252 nova_compute[230010]: 2025-11-24 10:01:39.263 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 05:01:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 05:01:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:39 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:01:39 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 05:01:39 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 05:01:40 np0005533252 podman[241899]: 2025-11-24 10:01:40.326330568 +0000 UTC m=+0.066565360 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 24 05:01:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:40.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 05:01:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 05:01:41 np0005533252 nova_compute[230010]: 2025-11-24 10:01:41.148 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:41 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 05:01:41 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 05:01:41 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:01:41 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:01:41 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:01:41 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:01:41 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:01:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:01:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:01:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:44 np0005533252 nova_compute[230010]: 2025-11-24 10:01:44.265 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:44.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:45.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:01:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:01:46 np0005533252 nova_compute[230010]: 2025-11-24 10:01:46.150 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:46 np0005533252 podman[241922]: 2025-11-24 10:01:46.413265533 +0000 UTC m=+0.138016170 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 05:01:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:01:46 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:01:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:01:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:01:47 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:01:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:49.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:49 np0005533252 nova_compute[230010]: 2025-11-24 10:01:49.268 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:51 np0005533252 nova_compute[230010]: 2025-11-24 10:01:51.152 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:51.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:01:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:01:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:01:52 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:01:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:01:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:53.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:01:54 np0005533252 nova_compute[230010]: 2025-11-24 10:01:54.269 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:54 np0005533252 podman[242002]: 2025-11-24 10:01:54.324690414 +0000 UTC m=+0.061882086 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 05:01:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:01:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:54.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:55.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:56 np0005533252 nova_compute[230010]: 2025-11-24 10:01:56.153 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:56.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:57.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:58.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:01:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:01:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:59.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:01:59 np0005533252 nova_compute[230010]: 2025-11-24 10:01:59.272 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:01:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:02:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:02:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:00.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:01 np0005533252 nova_compute[230010]: 2025-11-24 10:02:01.154 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:01.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:02.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:04 np0005533252 nova_compute[230010]: 2025-11-24 10:02:04.275 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:05.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:06 np0005533252 nova_compute[230010]: 2025-11-24 10:02:06.156 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:06.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:08.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:09 np0005533252 nova_compute[230010]: 2025-11-24 10:02:09.277 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:11 np0005533252 nova_compute[230010]: 2025-11-24 10:02:11.158 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:11 np0005533252 podman[242054]: 2025-11-24 10:02:11.317086457 +0000 UTC m=+0.055960171 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 24 05:02:11 np0005533252 nova_compute[230010]: 2025-11-24 10:02:11.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:14 np0005533252 nova_compute[230010]: 2025-11-24 10:02:14.278 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:14.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:02:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:02:15 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:02:15.737 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:02:15 np0005533252 nova_compute[230010]: 2025-11-24 10:02:15.737 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:15 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:02:15.738 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 05:02:16 np0005533252 nova_compute[230010]: 2025-11-24 10:02:16.161 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:16 np0005533252 nova_compute[230010]: 2025-11-24 10:02:16.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:17 np0005533252 podman[242077]: 2025-11-24 10:02:17.357115517 +0000 UTC m=+0.093525642 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 05:02:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:18.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:18 np0005533252 nova_compute[230010]: 2025-11-24 10:02:18.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:19 np0005533252 nova_compute[230010]: 2025-11-24 10:02:19.324 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:02:20.065 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:02:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:02:20.065 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:02:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:02:20.065 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:02:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:20.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:20 np0005533252 nova_compute[230010]: 2025-11-24 10:02:20.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:20 np0005533252 nova_compute[230010]: 2025-11-24 10:02:20.764 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:02:21 np0005533252 nova_compute[230010]: 2025-11-24 10:02:21.163 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:21.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:21 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:02:21.740 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:02:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:22.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:23 np0005533252 nova_compute[230010]: 2025-11-24 10:02:23.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:23 np0005533252 nova_compute[230010]: 2025-11-24 10:02:23.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:23 np0005533252 nova_compute[230010]: 2025-11-24 10:02:23.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 24 05:02:23 np0005533252 nova_compute[230010]: 2025-11-24 10:02:23.777 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.325 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:24.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.772 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.772 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.772 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.796 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.796 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:02:24 np0005533252 nova_compute[230010]: 2025-11-24 10:02:24.796 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:02:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:02:25 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3755230538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:02:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:25.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:25 np0005533252 nova_compute[230010]: 2025-11-24 10:02:25.231 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:02:25 np0005533252 podman[242130]: 2025-11-24 10:02:25.314259567 +0000 UTC m=+0.049924163 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 05:02:25 np0005533252 nova_compute[230010]: 2025-11-24 10:02:25.374 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:02:25 np0005533252 nova_compute[230010]: 2025-11-24 10:02:25.375 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4961MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:02:25 np0005533252 nova_compute[230010]: 2025-11-24 10:02:25.375 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:02:25 np0005533252 nova_compute[230010]: 2025-11-24 10:02:25.375 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:02:25 np0005533252 nova_compute[230010]: 2025-11-24 10:02:25.496 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:02:25 np0005533252 nova_compute[230010]: 2025-11-24 10:02:25.496 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:02:25 np0005533252 nova_compute[230010]: 2025-11-24 10:02:25.554 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:02:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:02:26 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3722196526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:02:26 np0005533252 nova_compute[230010]: 2025-11-24 10:02:26.050 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 05:02:26 np0005533252 nova_compute[230010]: 2025-11-24 10:02:26.059 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 05:02:26 np0005533252 nova_compute[230010]: 2025-11-24 10:02:26.073 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 05:02:26 np0005533252 nova_compute[230010]: 2025-11-24 10:02:26.076 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 05:02:26 np0005533252 nova_compute[230010]: 2025-11-24 10:02:26.076 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:02:26 np0005533252 nova_compute[230010]: 2025-11-24 10:02:26.166 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:26.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:27 np0005533252 nova_compute[230010]: 2025-11-24 10:02:27.070 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:02:27 np0005533252 nova_compute[230010]: 2025-11-24 10:02:27.070 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 05:02:27 np0005533252 nova_compute[230010]: 2025-11-24 10:02:27.070 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 05:02:27 np0005533252 nova_compute[230010]: 2025-11-24 10:02:27.083 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 05:02:27 np0005533252 nova_compute[230010]: 2025-11-24 10:02:27.083 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:02:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:27 np0005533252 nova_compute[230010]: 2025-11-24 10:02:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:02:27 np0005533252 nova_compute[230010]: 2025-11-24 10:02:27.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 05:02:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:02:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:02:29 np0005533252 nova_compute[230010]: 2025-11-24 10:02:29.327 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:02:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:02:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:30.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:31 np0005533252 nova_compute[230010]: 2025-11-24 10:02:31.170 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:31.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:33.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:34 np0005533252 nova_compute[230010]: 2025-11-24 10:02:34.362 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:35.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:36 np0005533252 nova_compute[230010]: 2025-11-24 10:02:36.171 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:36.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:37.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:38.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:39.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:39 np0005533252 nova_compute[230010]: 2025-11-24 10:02:39.364 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:40.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:41 np0005533252 nova_compute[230010]: 2025-11-24 10:02:41.172 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:42 np0005533252 podman[242205]: 2025-11-24 10:02:42.329843425 +0000 UTC m=+0.072235440 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:02:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:42.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:43.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:44 np0005533252 nova_compute[230010]: 2025-11-24 10:02:44.366 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:44.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:02:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:02:46 np0005533252 nova_compute[230010]: 2025-11-24 10:02:46.177 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:46.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:47.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:48 np0005533252 podman[242228]: 2025-11-24 10:02:48.379313505 +0000 UTC m=+0.110069166 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 24 05:02:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:48.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:49.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:49 np0005533252 nova_compute[230010]: 2025-11-24 10:02:49.410 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:50.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:51 np0005533252 nova_compute[230010]: 2025-11-24 10:02:51.178 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:51.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:52 np0005533252 nova_compute[230010]: 2025-11-24 10:02:52.875 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:02:52 np0005533252 nova_compute[230010]: 2025-11-24 10:02:52.877 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:02:52 np0005533252 nova_compute[230010]: 2025-11-24 10:02:52.894 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 05:02:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:02:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:02:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:02:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:02:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:02:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.017 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.018 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.025 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.025 230014 INFO nova.compute.claims [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Claim successful on node compute-1.ctlplane.example.com
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.142 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 05:02:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:53.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:02:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4175873145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.632 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.638 230014 DEBUG nova.compute.provider_tree [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.652 230014 DEBUG nova.scheduler.client.report [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.673 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.674 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.734 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.735 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.760 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.782 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.876 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.878 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.878 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Creating image(s)
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.904 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.931 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.957 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 05:02:53 np0005533252 nova_compute[230010]: 2025-11-24 10:02:53.962 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.035 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.036 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.037 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.037 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.059 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.064 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 16f34aac-788f-4079-9636-0db2c8de6422_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.323 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 16f34aac-788f-4079-9636-0db2c8de6422_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.409 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.451 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:02:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.536 230014 DEBUG nova.policy [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.546 230014 DEBUG nova.objects.instance [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 16f34aac-788f-4079-9636-0db2c8de6422 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.559 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.560 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Ensure instance console log exists: /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.560 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.561 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:02:54 np0005533252 nova_compute[230010]: 2025-11-24 10:02:54.561 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:02:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:54.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:55.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:56 np0005533252 nova_compute[230010]: 2025-11-24 10:02:56.180 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:56 np0005533252 podman[242554]: 2025-11-24 10:02:56.318702703 +0000 UTC m=+0.056357651 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 05:02:56 np0005533252 nova_compute[230010]: 2025-11-24 10:02:56.521 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Successfully created port: 99ae7646-7560-4043-bead-b1665083257c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 05:02:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:56.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:02:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:57.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:02:57 np0005533252 nova_compute[230010]: 2025-11-24 10:02:57.783 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Successfully updated port: 99ae7646-7560-4043-bead-b1665083257c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 05:02:57 np0005533252 nova_compute[230010]: 2025-11-24 10:02:57.799 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 05:02:57 np0005533252 nova_compute[230010]: 2025-11-24 10:02:57.799 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 05:02:57 np0005533252 nova_compute[230010]: 2025-11-24 10:02:57.799 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 05:02:57 np0005533252 nova_compute[230010]: 2025-11-24 10:02:57.872 230014 DEBUG nova.compute.manager [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-changed-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 05:02:57 np0005533252 nova_compute[230010]: 2025-11-24 10:02:57.873 230014 DEBUG nova.compute.manager [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing instance network info cache due to event network-changed-99ae7646-7560-4043-bead-b1665083257c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 05:02:57 np0005533252 nova_compute[230010]: 2025-11-24 10:02:57.873 230014 DEBUG oslo_concurrency.lockutils [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 05:02:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:02:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:02:58 np0005533252 nova_compute[230010]: 2025-11-24 10:02:58.466 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 05:02:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:58.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:02:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:02:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:59.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.453 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:02:59 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:02:59 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:02:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.520 230014 DEBUG nova.network.neutron [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.537 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.537 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance network_info: |[{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.537 230014 DEBUG oslo_concurrency.lockutils [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.538 230014 DEBUG nova.network.neutron [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing network info cache for port 99ae7646-7560-4043-bead-b1665083257c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.540 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start _get_guest_xml network_info=[{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.545 230014 WARNING nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.563 230014 DEBUG nova.virt.libvirt.host [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.564 230014 DEBUG nova.virt.libvirt.host [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.571 230014 DEBUG nova.virt.libvirt.host [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.571 230014 DEBUG nova.virt.libvirt.host [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.572 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.572 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.573 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.574 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.574 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.575 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.575 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.576 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.576 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.577 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.577 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.578 230014 DEBUG nova.virt.hardware [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 24 05:02:59 np0005533252 nova_compute[230010]: 2025-11-24 10:02:59.583 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 05:03:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/139457227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.041 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.069 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.074 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:03:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:03:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 05:03:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3946666677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.530 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.532 230014 DEBUG nova.virt.libvirt.vif [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:02:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2027370088',display_name='tempest-TestNetworkBasicOps-server-2027370088',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2027370088',id=12,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGm7yTWaQNPdMC8QFfBuLjQRH6ApcYu+qgaY7VksG3yV1HCE4jpliKx7D8r+sNe/kvB8dUvGyFVNy/wUcpBDiRvylUupCj2Y07y6yC0JXN3khCgh2GMBQWQ7Dhz5WIb2PQ==',key_name='tempest-TestNetworkBasicOps-2017683419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-jq31848e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:02:53Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=16f34aac-788f-4079-9636-0db2c8de6422,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.533 230014 DEBUG nova.network.os_vif_util [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.535 230014 DEBUG nova.network.os_vif_util [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.536 230014 DEBUG nova.objects.instance [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 16f34aac-788f-4079-9636-0db2c8de6422 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:03:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:00.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.914 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] End _get_guest_xml xml=<domain type="kvm">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <uuid>16f34aac-788f-4079-9636-0db2c8de6422</uuid>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <name>instance-0000000c</name>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <memory>131072</memory>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <vcpu>1</vcpu>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <nova:name>tempest-TestNetworkBasicOps-server-2027370088</nova:name>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <nova:creationTime>2025-11-24 10:02:59</nova:creationTime>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <nova:flavor name="m1.nano">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <nova:memory>128</nova:memory>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <nova:disk>1</nova:disk>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <nova:swap>0</nova:swap>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <nova:ephemeral>0</nova:ephemeral>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <nova:vcpus>1</nova:vcpus>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      </nova:flavor>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <nova:owner>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      </nova:owner>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <nova:ports>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <nova:port uuid="99ae7646-7560-4043-bead-b1665083257c">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        </nova:port>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      </nova:ports>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </nova:instance>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <sysinfo type="smbios">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <system>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <entry name="manufacturer">RDO</entry>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <entry name="product">OpenStack Compute</entry>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <entry name="serial">16f34aac-788f-4079-9636-0db2c8de6422</entry>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <entry name="uuid">16f34aac-788f-4079-9636-0db2c8de6422</entry>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <entry name="family">Virtual Machine</entry>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </system>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <os>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <boot dev="hd"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <smbios mode="sysinfo"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  </os>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <features>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <vmcoreinfo/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  </features>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <clock offset="utc">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <timer name="pit" tickpolicy="delay"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <timer name="hpet" present="no"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  </clock>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <cpu mode="host-model" match="exact">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <topology sockets="1" cores="1" threads="1"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  <devices>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <disk type="network" device="disk">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/16f34aac-788f-4079-9636-0db2c8de6422_disk">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      </source>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      </auth>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <target dev="vda" bus="virtio"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </disk>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <disk type="network" device="cdrom">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/16f34aac-788f-4079-9636-0db2c8de6422_disk.config">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      </source>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      </auth>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <target dev="sda" bus="sata"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </disk>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <interface type="ethernet">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <mac address="fa:16:3e:30:f8:b9"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <driver name="vhost" rx_queue_size="512"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <mtu size="1442"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <target dev="tap99ae7646-75"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </interface>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <serial type="pty">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <log file="/var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/console.log" append="off"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </serial>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <video>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </video>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <input type="tablet" bus="usb"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <rng model="virtio">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <backend model="random">/dev/urandom</backend>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </rng>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <controller type="usb" index="0"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    <memballoon model="virtio">
Nov 24 05:03:00 np0005533252 nova_compute[230010]:      <stats period="10"/>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 05:03:00 np0005533252 nova_compute[230010]:  </devices>
Nov 24 05:03:00 np0005533252 nova_compute[230010]: </domain>
Nov 24 05:03:00 np0005533252 nova_compute[230010]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.916 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Preparing to wait for external event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.916 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.917 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.917 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.918 230014 DEBUG nova.virt.libvirt.vif [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:02:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2027370088',display_name='tempest-TestNetworkBasicOps-server-2027370088',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2027370088',id=12,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGm7yTWaQNPdMC8QFfBuLjQRH6ApcYu+qgaY7VksG3yV1HCE4jpliKx7D8r+sNe/kvB8dUvGyFVNy/wUcpBDiRvylUupCj2Y07y6yC0JXN3khCgh2GMBQWQ7Dhz5WIb2PQ==',key_name='tempest-TestNetworkBasicOps-2017683419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-jq31848e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:02:53Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=16f34aac-788f-4079-9636-0db2c8de6422,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.918 230014 DEBUG nova.network.os_vif_util [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.919 230014 DEBUG nova.network.os_vif_util [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.919 230014 DEBUG os_vif [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.920 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.921 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.921 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.926 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.926 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99ae7646-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.926 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99ae7646-75, col_values=(('external_ids', {'iface-id': '99ae7646-7560-4043-bead-b1665083257c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:f8:b9', 'vm-uuid': '16f34aac-788f-4079-9636-0db2c8de6422'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.964 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:00 np0005533252 NetworkManager[48870]: <info>  [1763978580.9654] manager: (tap99ae7646-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.968 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.973 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:00 np0005533252 nova_compute[230010]: 2025-11-24 10:03:00.976 230014 INFO os_vif [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75')#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.029 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.029 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.029 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:30:f8:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.030 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Using config drive#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.060 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.182 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:01.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.509 230014 DEBUG nova.network.neutron [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updated VIF entry in instance network info cache for port 99ae7646-7560-4043-bead-b1665083257c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.510 230014 DEBUG nova.network.neutron [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.518 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Creating config drive at /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.523 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnegmzrpt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.542 230014 DEBUG oslo_concurrency.lockutils [req-b1ce6229-6cf4-49b0-9571-225463bf2b16 req-26514d08-eb9d-4065-bb32-3fdfe8063604 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.649 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnegmzrpt" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.677 230014 DEBUG nova.storage.rbd_utils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 16f34aac-788f-4079-9636-0db2c8de6422_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.681 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config 16f34aac-788f-4079-9636-0db2c8de6422_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.844 230014 DEBUG oslo_concurrency.processutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config 16f34aac-788f-4079-9636-0db2c8de6422_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.846 230014 INFO nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Deleting local config drive /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422/disk.config because it was imported into RBD.#033[00m
Nov 24 05:03:01 np0005533252 systemd[1]: Starting libvirt secret daemon...
Nov 24 05:03:01 np0005533252 systemd[1]: Started libvirt secret daemon.
Nov 24 05:03:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 05:03:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/865843196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 05:03:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 05:03:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/865843196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 05:03:01 np0005533252 kernel: tap99ae7646-75: entered promiscuous mode
Nov 24 05:03:01 np0005533252 NetworkManager[48870]: <info>  [1763978581.9602] manager: (tap99ae7646-75): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 24 05:03:01 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:01Z|00088|binding|INFO|Claiming lport 99ae7646-7560-4043-bead-b1665083257c for this chassis.
Nov 24 05:03:01 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:01Z|00089|binding|INFO|99ae7646-7560-4043-bead-b1665083257c: Claiming fa:16:3e:30:f8:b9 10.100.0.6
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.961 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.967 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:01 np0005533252 nova_compute[230010]: 2025-11-24 10:03:01.972 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:01 np0005533252 NetworkManager[48870]: <info>  [1763978581.9729] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 24 05:03:01 np0005533252 NetworkManager[48870]: <info>  [1763978581.9736] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 24 05:03:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.978 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:f8:b9 10.100.0.6'], port_security=['fa:16:3e:30:f8:b9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '16f34aac-788f-4079-9636-0db2c8de6422', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0394b1e1-eb4e-4c88-8aad-cca296ee6f3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5c42cc1-2181-41fb-bb98-22dec924e208, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=99ae7646-7560-4043-bead-b1665083257c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:03:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.981 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 99ae7646-7560-4043-bead-b1665083257c in datapath d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 bound to our chassis#033[00m
Nov 24 05:03:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.983 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9ce2622-5822-4ecf-9fb9-f5f15c8ea094#033[00m
Nov 24 05:03:01 np0005533252 systemd-udevd[242756]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 05:03:01 np0005533252 systemd-machined[193537]: New machine qemu-5-instance-0000000c.
Nov 24 05:03:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.998 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[44c82093-949f-43ab-beef-5c33852a5cec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:01.999 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9ce2622-51 in ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.001 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9ce2622-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.001 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6665be77-3dae-4cf1-a497-be601fee1fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.002 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[54bacb12-2c3e-4978-8fc6-f3a1deca6a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.016 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf8b391-528c-4db9-96b4-20fb2ca3c52e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 NetworkManager[48870]: <info>  [1763978582.0199] device (tap99ae7646-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 05:03:02 np0005533252 NetworkManager[48870]: <info>  [1763978582.0225] device (tap99ae7646-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 05:03:02 np0005533252 systemd[1]: Started Virtual Machine qemu-5-instance-0000000c.
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.043 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[5768ebf8-6f7f-47a1-b83e-bff4dd321597]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:02Z|00090|binding|INFO|Setting lport 99ae7646-7560-4043-bead-b1665083257c up in Southbound
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.079 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[37bdd7a2-7684-4a40-bf27-39fb25df1858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 NetworkManager[48870]: <info>  [1763978582.0929] manager: (tapd9ce2622-50): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.092 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:02 np0005533252 systemd-udevd[242759]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.092 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6a1acf-c2e0-40e4-ad25-3ee6a3ed7250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:02Z|00091|binding|INFO|Setting lport 99ae7646-7560-4043-bead-b1665083257c ovn-installed in OVS
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.099 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.124 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[8d591c01-ffc0-4c64-9738-2ec50c37a110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.128 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[0a53dfef-1d68-4531-98bd-665314dd694e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 NetworkManager[48870]: <info>  [1763978582.1538] device (tapd9ce2622-50): carrier: link connected
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.158 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8bab8b-16b9-401b-a22a-d23eb322e1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.185 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ded259bd-4cbc-43a8-b907-de8bd1992357]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9ce2622-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452570, 'reachable_time': 33730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242788, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.206 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3d153ff4-27f2-443d-8cb2-cfa8f2689be4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:68d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452570, 'tstamp': 452570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242789, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.226 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac55f7e-3683-46a9-9975-8c60fc9a7a10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9ce2622-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452570, 'reachable_time': 33730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242790, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.247 230014 DEBUG nova.compute.manager [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.247 230014 DEBUG oslo_concurrency.lockutils [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.248 230014 DEBUG oslo_concurrency.lockutils [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.248 230014 DEBUG oslo_concurrency.lockutils [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.248 230014 DEBUG nova.compute.manager [req-cbea8a1e-7000-4466-9ebd-a28c87be622b req-e66a502a-ce2f-43ef-bc72-53223711dafc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Processing event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.269 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[12d1f13d-dbef-4cdc-87a4-a5b9d5a0ff78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.344 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b2213430-6c00-45b2-af14-214ae994bf24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.346 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9ce2622-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.347 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.348 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9ce2622-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.350 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:02 np0005533252 NetworkManager[48870]: <info>  [1763978582.3509] manager: (tapd9ce2622-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 24 05:03:02 np0005533252 kernel: tapd9ce2622-50: entered promiscuous mode
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.353 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.355 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9ce2622-50, col_values=(('external_ids', {'iface-id': '7ff70316-0c3c-4814-add9-f5919c7adc2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.356 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:02 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:02Z|00092|binding|INFO|Releasing lport 7ff70316-0c3c-4814-add9-f5919c7adc2b from this chassis (sb_readonly=0)
Nov 24 05:03:02 np0005533252 nova_compute[230010]: 2025-11-24 10:03:02.369 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.372 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.373 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9b13016a-696b-49e8-b6ed-0ef0ac2fc912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.375 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: global
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    log         /dev/log local0 debug
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    log-tag     haproxy-metadata-proxy-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    user        root
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    group       root
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    maxconn     1024
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    pidfile     /var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    daemon
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: defaults
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    log global
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    mode http
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    option httplog
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    option dontlognull
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    option http-server-close
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    option forwardfor
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    retries                 3
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    timeout http-request    30s
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    timeout connect         30s
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    timeout client          32s
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    timeout server          32s
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    timeout http-keep-alive 30s
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: listen listener
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    bind 169.254.169.254:80
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    server metadata /var/lib/neutron/metadata_proxy
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]:    http-request add-header X-OVN-Network-ID d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 24 05:03:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:02.376 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'env', 'PROCESS_TAG=haproxy-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 24 05:03:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:02.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:02 np0005533252 podman[242822]: 2025-11-24 10:03:02.776627853 +0000 UTC m=+0.053649435 container create 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 05:03:02 np0005533252 systemd[1]: Started libpod-conmon-2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1.scope.
Nov 24 05:03:02 np0005533252 podman[242822]: 2025-11-24 10:03:02.748071974 +0000 UTC m=+0.025093576 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 05:03:02 np0005533252 systemd[1]: Started libcrun container.
Nov 24 05:03:02 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2695d8c7141ff441362d396fc0649dcdddbdcd12afc2cf9c7f37256879ce4706/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 05:03:02 np0005533252 podman[242822]: 2025-11-24 10:03:02.879651566 +0000 UTC m=+0.156673178 container init 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 05:03:02 np0005533252 podman[242822]: 2025-11-24 10:03:02.884631537 +0000 UTC m=+0.161653109 container start 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 05:03:02 np0005533252 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [NOTICE]   (242842) : New worker (242844) forked
Nov 24 05:03:02 np0005533252 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [NOTICE]   (242842) : Loading success.
Nov 24 05:03:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:03.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.476 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978583.4758582, 16f34aac-788f-4079-9636-0db2c8de6422 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.477 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] VM Started (Lifecycle Event)#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.479 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.483 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.486 230014 INFO nova.virt.libvirt.driver [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance spawned successfully.#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.486 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.501 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.504 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.512 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.512 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.513 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.513 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.514 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.514 230014 DEBUG nova.virt.libvirt.driver [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.521 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.521 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978583.476001, 16f34aac-788f-4079-9636-0db2c8de6422 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.521 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] VM Paused (Lifecycle Event)#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.547 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.552 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978583.4820945, 16f34aac-788f-4079-9636-0db2c8de6422 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.552 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] VM Resumed (Lifecycle Event)#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.570 230014 INFO nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Took 9.69 seconds to spawn the instance on the hypervisor.#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.571 230014 DEBUG nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.573 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.581 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.608 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.643 230014 INFO nova.compute.manager [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Took 10.70 seconds to build instance.#033[00m
Nov 24 05:03:03 np0005533252 nova_compute[230010]: 2025-11-24 10:03:03.655 230014 DEBUG oslo_concurrency.lockutils [None req-86b4efd8-e845-47b8-839d-e817d119699f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:04 np0005533252 nova_compute[230010]: 2025-11-24 10:03:04.323 230014 DEBUG nova.compute.manager [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:03:04 np0005533252 nova_compute[230010]: 2025-11-24 10:03:04.323 230014 DEBUG oslo_concurrency.lockutils [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:04 np0005533252 nova_compute[230010]: 2025-11-24 10:03:04.323 230014 DEBUG oslo_concurrency.lockutils [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:04 np0005533252 nova_compute[230010]: 2025-11-24 10:03:04.323 230014 DEBUG oslo_concurrency.lockutils [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:04 np0005533252 nova_compute[230010]: 2025-11-24 10:03:04.324 230014 DEBUG nova.compute.manager [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] No waiting events found dispatching network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 05:03:04 np0005533252 nova_compute[230010]: 2025-11-24 10:03:04.324 230014 WARNING nova.compute.manager [req-bb6482ee-29f4-40e0-878d-38f243a0e601 req-824ed1b5-4814-4d36-a892-91200dc0b833 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received unexpected event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c for instance with vm_state active and task_state None.#033[00m
Nov 24 05:03:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:04.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:05.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:05 np0005533252 nova_compute[230010]: 2025-11-24 10:03:05.966 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:06 np0005533252 nova_compute[230010]: 2025-11-24 10:03:06.184 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:06 np0005533252 nova_compute[230010]: 2025-11-24 10:03:06.680 230014 DEBUG nova.compute.manager [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-changed-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:03:06 np0005533252 nova_compute[230010]: 2025-11-24 10:03:06.681 230014 DEBUG nova.compute.manager [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing instance network info cache due to event network-changed-99ae7646-7560-4043-bead-b1665083257c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 05:03:06 np0005533252 nova_compute[230010]: 2025-11-24 10:03:06.681 230014 DEBUG oslo_concurrency.lockutils [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 05:03:06 np0005533252 nova_compute[230010]: 2025-11-24 10:03:06.681 230014 DEBUG oslo_concurrency.lockutils [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 05:03:06 np0005533252 nova_compute[230010]: 2025-11-24 10:03:06.681 230014 DEBUG nova.network.neutron [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing network info cache for port 99ae7646-7560-4043-bead-b1665083257c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 05:03:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:03:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:06.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:03:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:03:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:07.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:03:07 np0005533252 nova_compute[230010]: 2025-11-24 10:03:07.727 230014 DEBUG nova.network.neutron [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updated VIF entry in instance network info cache for port 99ae7646-7560-4043-bead-b1665083257c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 05:03:07 np0005533252 nova_compute[230010]: 2025-11-24 10:03:07.728 230014 DEBUG nova.network.neutron [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:03:07 np0005533252 nova_compute[230010]: 2025-11-24 10:03:07.752 230014 DEBUG oslo_concurrency.lockutils [req-872716c2-50d9-4372-b648-6f0eae46be06 req-aad0e74e-3cca-41e8-8691-b5b1e111740c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 05:03:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:08.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:09.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:10.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:11 np0005533252 nova_compute[230010]: 2025-11-24 10:03:11.001 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:11 np0005533252 nova_compute[230010]: 2025-11-24 10:03:11.190 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:11.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:13.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:13 np0005533252 podman[242926]: 2025-11-24 10:03:13.34649471 +0000 UTC m=+0.074519196 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 05:03:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:14.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:15.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:03:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:03:16 np0005533252 nova_compute[230010]: 2025-11-24 10:03:16.004 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:16 np0005533252 nova_compute[230010]: 2025-11-24 10:03:16.196 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:16 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:16Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:f8:b9 10.100.0.6
Nov 24 05:03:16 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:16Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:f8:b9 10.100.0.6
Nov 24 05:03:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:16.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:17.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:17 np0005533252 nova_compute[230010]: 2025-11-24 10:03:17.775 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:18.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:18 np0005533252 nova_compute[230010]: 2025-11-24 10:03:18.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:19.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:19 np0005533252 podman[242950]: 2025-11-24 10:03:19.429205396 +0000 UTC m=+0.162987843 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 05:03:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:20.066 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:20.067 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:20.067 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:20.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:20 np0005533252 nova_compute[230010]: 2025-11-24 10:03:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:20 np0005533252 nova_compute[230010]: 2025-11-24 10:03:20.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:03:21 np0005533252 nova_compute[230010]: 2025-11-24 10:03:21.009 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:21 np0005533252 nova_compute[230010]: 2025-11-24 10:03:21.200 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:21.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:22 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:22.898 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:03:22 np0005533252 nova_compute[230010]: 2025-11-24 10:03:22.899 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:22 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:22.900 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 05:03:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:23 np0005533252 nova_compute[230010]: 2025-11-24 10:03:23.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:24.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:24 np0005533252 nova_compute[230010]: 2025-11-24 10:03:24.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:24 np0005533252 nova_compute[230010]: 2025-11-24 10:03:24.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:24 np0005533252 nova_compute[230010]: 2025-11-24 10:03:24.790 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:24 np0005533252 nova_compute[230010]: 2025-11-24 10:03:24.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:24 np0005533252 nova_compute[230010]: 2025-11-24 10:03:24.791 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:24 np0005533252 nova_compute[230010]: 2025-11-24 10:03:24.791 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:03:24 np0005533252 nova_compute[230010]: 2025-11-24 10:03:24.792 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:03:25 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1881790664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.225 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.289 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.290 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 05:03:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:25.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.438 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.439 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4767MB free_disk=59.89735412597656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.439 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.440 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.527 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 16f34aac-788f-4079-9636-0db2c8de6422 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.527 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.527 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:03:25 np0005533252 nova_compute[230010]: 2025-11-24 10:03:25.568 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:03:25 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1287250146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:03:26 np0005533252 nova_compute[230010]: 2025-11-24 10:03:26.009 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:26 np0005533252 nova_compute[230010]: 2025-11-24 10:03:26.012 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:26 np0005533252 nova_compute[230010]: 2025-11-24 10:03:26.016 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:03:26 np0005533252 nova_compute[230010]: 2025-11-24 10:03:26.037 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:03:26 np0005533252 nova_compute[230010]: 2025-11-24 10:03:26.058 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:03:26 np0005533252 nova_compute[230010]: 2025-11-24 10:03:26.058 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:26 np0005533252 nova_compute[230010]: 2025-11-24 10:03:26.201 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:26.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.068 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.068 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.069 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.069 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.069 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.070 230014 INFO nova.compute.manager [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Terminating instance#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.071 230014 DEBUG nova.compute.manager [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 24 05:03:27 np0005533252 kernel: tap99ae7646-75 (unregistering): left promiscuous mode
Nov 24 05:03:27 np0005533252 NetworkManager[48870]: <info>  [1763978607.1266] device (tap99ae7646-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 05:03:27 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:27Z|00093|binding|INFO|Releasing lport 99ae7646-7560-4043-bead-b1665083257c from this chassis (sb_readonly=0)
Nov 24 05:03:27 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:27Z|00094|binding|INFO|Setting lport 99ae7646-7560-4043-bead-b1665083257c down in Southbound
Nov 24 05:03:27 np0005533252 ovn_controller[132966]: 2025-11-24T10:03:27Z|00095|binding|INFO|Removing iface tap99ae7646-75 ovn-installed in OVS
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.135 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.144 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:f8:b9 10.100.0.6'], port_security=['fa:16:3e:30:f8:b9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '16f34aac-788f-4079-9636-0db2c8de6422', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0394b1e1-eb4e-4c88-8aad-cca296ee6f3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5c42cc1-2181-41fb-bb98-22dec924e208, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=99ae7646-7560-4043-bead-b1665083257c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.145 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 99ae7646-7560-4043-bead-b1665083257c in datapath d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 unbound from our chassis#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.146 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.148 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[25420972-6e1d-4d5a-a0f9-d24d141558d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.149 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 namespace which is not needed anymore#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.154 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:27 np0005533252 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 24 05:03:27 np0005533252 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Consumed 15.782s CPU time.
Nov 24 05:03:27 np0005533252 systemd-machined[193537]: Machine qemu-5-instance-0000000c terminated.
Nov 24 05:03:27 np0005533252 podman[243025]: 2025-11-24 10:03:27.20514978 +0000 UTC m=+0.053933511 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 24 05:03:27 np0005533252 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [NOTICE]   (242842) : haproxy version is 2.8.14-c23fe91
Nov 24 05:03:27 np0005533252 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [NOTICE]   (242842) : path to executable is /usr/sbin/haproxy
Nov 24 05:03:27 np0005533252 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [WARNING]  (242842) : Exiting Master process...
Nov 24 05:03:27 np0005533252 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [ALERT]    (242842) : Current worker (242844) exited with code 143 (Terminated)
Nov 24 05:03:27 np0005533252 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[242838]: [WARNING]  (242842) : All workers exited. Exiting... (0)
Nov 24 05:03:27 np0005533252 systemd[1]: libpod-2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1.scope: Deactivated successfully.
Nov 24 05:03:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:27.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.324 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:27 np0005533252 podman[243067]: 2025-11-24 10:03:27.329960997 +0000 UTC m=+0.086154681 container died 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.338 230014 INFO nova.virt.libvirt.driver [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance destroyed successfully.#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.339 230014 DEBUG nova.objects.instance [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 16f34aac-788f-4079-9636-0db2c8de6422 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.348 230014 DEBUG nova.virt.libvirt.vif [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T10:02:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2027370088',display_name='tempest-TestNetworkBasicOps-server-2027370088',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2027370088',id=12,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGm7yTWaQNPdMC8QFfBuLjQRH6ApcYu+qgaY7VksG3yV1HCE4jpliKx7D8r+sNe/kvB8dUvGyFVNy/wUcpBDiRvylUupCj2Y07y6yC0JXN3khCgh2GMBQWQ7Dhz5WIb2PQ==',key_name='tempest-TestNetworkBasicOps-2017683419',keypairs=<?>,launch_index=0,launched_at=2025-11-24T10:03:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-jq31848e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T10:03:03Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=16f34aac-788f-4079-9636-0db2c8de6422,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.349 230014 DEBUG nova.network.os_vif_util [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.349 230014 DEBUG nova.network.os_vif_util [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.350 230014 DEBUG os_vif [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.351 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.352 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99ae7646-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.353 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.355 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.357 230014 INFO os_vif [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=99ae7646-7560-4043-bead-b1665083257c,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99ae7646-75')#033[00m
Nov 24 05:03:27 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1-userdata-shm.mount: Deactivated successfully.
Nov 24 05:03:27 np0005533252 systemd[1]: var-lib-containers-storage-overlay-2695d8c7141ff441362d396fc0649dcdddbdcd12afc2cf9c7f37256879ce4706-merged.mount: Deactivated successfully.
Nov 24 05:03:27 np0005533252 podman[243067]: 2025-11-24 10:03:27.386543932 +0000 UTC m=+0.142737636 container cleanup 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:03:27 np0005533252 systemd[1]: libpod-conmon-2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1.scope: Deactivated successfully.
Nov 24 05:03:27 np0005533252 podman[243122]: 2025-11-24 10:03:27.480867291 +0000 UTC m=+0.074148677 container remove 2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.491 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[3eca8511-8d76-4a09-9eb7-ef4cee05a633]: (4, ('Mon Nov 24 10:03:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 (2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1)\n2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1\nMon Nov 24 10:03:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 (2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1)\n2204afbd1ff852b76f946ef3662389689dd4b421d38aa911c9d6ce2ddd80afc1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.493 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b0111ab3-2100-4e50-86cd-58d5c697d17d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.494 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9ce2622-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:27 np0005533252 kernel: tapd9ce2622-50: left promiscuous mode
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.502 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.510 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.515 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a9828e-3856-414a-80da-2a2293330eb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.535 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[87174bf5-59e5-4fe3-bf6b-d6df3fe511c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.537 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ae3db0-d7db-4fa7-b75a-0cb0a35b17a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.558 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c574d512-a645-4194-9b43-65751a3f7371]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452562, 'reachable_time': 44442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243139, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.563 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 24 05:03:27 np0005533252 systemd[1]: run-netns-ovnmeta\x2dd9ce2622\x2d5822\x2d4ecf\x2d9fb9\x2df5f15c8ea094.mount: Deactivated successfully.
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.563 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[94fa03aa-a089-4ae5-bfb5-62ad6dcf86e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.878 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-changed-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.879 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing instance network info cache due to event network-changed-99ae7646-7560-4043-bead-b1665083257c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.879 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.880 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.880 230014 DEBUG nova.network.neutron [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Refreshing network info cache for port 99ae7646-7560-4043-bead-b1665083257c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 05:03:27 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:03:27.902 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.914 230014 INFO nova.virt.libvirt.driver [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Deleting instance files /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422_del#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.915 230014 INFO nova.virt.libvirt.driver [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Deletion of /var/lib/nova/instances/16f34aac-788f-4079-9636-0db2c8de6422_del complete#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.966 230014 INFO nova.compute.manager [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.967 230014 DEBUG oslo.service.loopingcall [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.967 230014 DEBUG nova.compute.manager [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 24 05:03:27 np0005533252 nova_compute[230010]: 2025-11-24 10:03:27.968 230014 DEBUG nova.network.neutron [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.055 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.076 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.076 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.077 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.089 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.090 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.090 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.090 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:03:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:03:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:28.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.865 230014 DEBUG nova.network.neutron [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.881 230014 INFO nova.compute.manager [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Took 0.91 seconds to deallocate network for instance.#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.925 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.926 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:28 np0005533252 nova_compute[230010]: 2025-11-24 10:03:28.981 230014 DEBUG oslo_concurrency.processutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:29.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:03:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/270683702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.462 230014 DEBUG oslo_concurrency.processutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.470 230014 DEBUG nova.compute.provider_tree [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.483 230014 DEBUG nova.scheduler.client.report [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:03:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.508 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.535 230014 INFO nova.scheduler.client.report [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 16f34aac-788f-4079-9636-0db2c8de6422#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.557 230014 DEBUG nova.network.neutron [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updated VIF entry in instance network info cache for port 99ae7646-7560-4043-bead-b1665083257c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.558 230014 DEBUG nova.network.neutron [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Updating instance_info_cache with network_info: [{"id": "99ae7646-7560-4043-bead-b1665083257c", "address": "fa:16:3e:30:f8:b9", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99ae7646-75", "ovs_interfaceid": "99ae7646-7560-4043-bead-b1665083257c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.594 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-16f34aac-788f-4079-9636-0db2c8de6422" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.595 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-unplugged-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.596 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.596 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.597 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.597 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] No waiting events found dispatching network-vif-unplugged-99ae7646-7560-4043-bead-b1665083257c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.598 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-unplugged-99ae7646-7560-4043-bead-b1665083257c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.598 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.599 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "16f34aac-788f-4079-9636-0db2c8de6422-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.599 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.599 230014 DEBUG oslo_concurrency.lockutils [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.600 230014 DEBUG nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] No waiting events found dispatching network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.600 230014 WARNING nova.compute.manager [req-26f1cc89-06e7-43ae-acdb-fc9c86da9468 req-254622b8-e566-4503-8895-cd116eac1653 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received unexpected event network-vif-plugged-99ae7646-7560-4043-bead-b1665083257c for instance with vm_state active and task_state deleting.#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.623 230014 DEBUG oslo_concurrency.lockutils [None req-0d78fac1-1bbd-42a4-ac82-95367a3d470a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "16f34aac-788f-4079-9636-0db2c8de6422" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.966 230014 DEBUG nova.compute.manager [req-5e0ce72e-e6b6-4e2d-8dba-79de39ba2163 req-9b7a96dc-0c72-4e0c-a6f8-542b6513d36f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Received event network-vif-deleted-99ae7646-7560-4043-bead-b1665083257c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.967 230014 INFO nova.compute.manager [req-5e0ce72e-e6b6-4e2d-8dba-79de39ba2163 req-9b7a96dc-0c72-4e0c-a6f8-542b6513d36f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Neutron deleted interface 99ae7646-7560-4043-bead-b1665083257c; detaching it from the instance and deleting it from the info cache#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.967 230014 DEBUG nova.network.neutron [req-5e0ce72e-e6b6-4e2d-8dba-79de39ba2163 req-9b7a96dc-0c72-4e0c-a6f8-542b6513d36f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 24 05:03:29 np0005533252 nova_compute[230010]: 2025-11-24 10:03:29.969 230014 DEBUG nova.compute.manager [req-5e0ce72e-e6b6-4e2d-8dba-79de39ba2163 req-9b7a96dc-0c72-4e0c-a6f8-542b6513d36f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Detach interface failed, port_id=99ae7646-7560-4043-bead-b1665083257c, reason: Instance 16f34aac-788f-4079-9636-0db2c8de6422 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 24 05:03:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:03:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:03:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:30.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:31 np0005533252 nova_compute[230010]: 2025-11-24 10:03:31.204 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:31.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:32 np0005533252 nova_compute[230010]: 2025-11-24 10:03:32.355 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:03:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:03:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:33.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:34.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:35.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:36 np0005533252 nova_compute[230010]: 2025-11-24 10:03:36.207 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:37.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:37 np0005533252 nova_compute[230010]: 2025-11-24 10:03:37.358 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:37 np0005533252 nova_compute[230010]: 2025-11-24 10:03:37.914 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:38 np0005533252 nova_compute[230010]: 2025-11-24 10:03:38.030 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:39.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:40.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:41 np0005533252 nova_compute[230010]: 2025-11-24 10:03:41.209 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:41.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:42 np0005533252 nova_compute[230010]: 2025-11-24 10:03:42.338 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978607.3363385, 16f34aac-788f-4079-9636-0db2c8de6422 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 05:03:42 np0005533252 nova_compute[230010]: 2025-11-24 10:03:42.338 230014 INFO nova.compute.manager [-] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] VM Stopped (Lifecycle Event)#033[00m
Nov 24 05:03:42 np0005533252 nova_compute[230010]: 2025-11-24 10:03:42.360 230014 DEBUG nova.compute.manager [None req-84138f20-4e7d-4a51-9173-78eb5dc57c28 - - - - - -] [instance: 16f34aac-788f-4079-9636-0db2c8de6422] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 05:03:42 np0005533252 nova_compute[230010]: 2025-11-24 10:03:42.361 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.405230) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622405297, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2367, "num_deletes": 251, "total_data_size": 6277458, "memory_usage": 6371104, "flush_reason": "Manual Compaction"}
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622424620, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4047955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31290, "largest_seqno": 33652, "table_properties": {"data_size": 4038373, "index_size": 6012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20157, "raw_average_key_size": 20, "raw_value_size": 4019152, "raw_average_value_size": 4088, "num_data_blocks": 258, "num_entries": 983, "num_filter_entries": 983, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978418, "oldest_key_time": 1763978418, "file_creation_time": 1763978622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19418 microseconds, and 7829 cpu microseconds.
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.424660) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4047955 bytes OK
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.424682) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.429072) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.429094) EVENT_LOG_v1 {"time_micros": 1763978622429087, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.429116) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6266902, prev total WAL file size 6266902, number of live WAL files 2.
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.431125) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3953KB)], [60(11MB)]
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622431162, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16211463, "oldest_snapshot_seqno": -1}
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6259 keys, 14109730 bytes, temperature: kUnknown
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622534010, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14109730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14068685, "index_size": 24295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 160415, "raw_average_key_size": 25, "raw_value_size": 13956841, "raw_average_value_size": 2229, "num_data_blocks": 974, "num_entries": 6259, "num_filter_entries": 6259, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.534907) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14109730 bytes
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.538908) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.4 rd, 137.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 6779, records dropped: 520 output_compression: NoCompression
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.538931) EVENT_LOG_v1 {"time_micros": 1763978622538921, "job": 36, "event": "compaction_finished", "compaction_time_micros": 102977, "compaction_time_cpu_micros": 29717, "output_level": 6, "num_output_files": 1, "total_output_size": 14109730, "num_input_records": 6779, "num_output_records": 6259, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622540250, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622543851, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.431040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:03:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:03:42.543968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:03:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:03:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:03:44 np0005533252 podman[243200]: 2025-11-24 10:03:44.360641077 +0000 UTC m=+0.087875483 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 05:03:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:03:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:03:46 np0005533252 nova_compute[230010]: 2025-11-24 10:03:46.212 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:03:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:46.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:47.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:47 np0005533252 nova_compute[230010]: 2025-11-24 10:03:47.362 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:03:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:49.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:50 np0005533252 podman[243248]: 2025-11-24 10:03:50.371703155 +0000 UTC m=+0.110506947 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 05:03:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.213 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:03:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:51.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.556 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.556 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.570 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.654 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.655 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.661 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.662 230014 INFO nova.compute.claims [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Claim successful on node compute-1.ctlplane.example.com
Nov 24 05:03:51 np0005533252 nova_compute[230010]: 2025-11-24 10:03:51.778 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 05:03:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:03:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3936690340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.185 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.191 230014 DEBUG nova.compute.provider_tree [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.203 230014 DEBUG nova.scheduler.client.report [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.222 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.224 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.273 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.274 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.303 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.321 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.364 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.406 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.407 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.408 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Creating image(s)
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.431 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.462 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.491 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.494 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.534 230014 DEBUG nova.policy [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.555 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.556 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.556 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.557 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.625 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.629 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 89909dc1-a7db-4cca-b837-5340532de97b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 05:03:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:52.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:52 np0005533252 nova_compute[230010]: 2025-11-24 10:03:52.966 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 89909dc1-a7db-4cca-b837-5340532de97b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:53 np0005533252 nova_compute[230010]: 2025-11-24 10:03:53.021 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 24 05:03:53 np0005533252 nova_compute[230010]: 2025-11-24 10:03:53.133 230014 DEBUG nova.objects.instance [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 89909dc1-a7db-4cca-b837-5340532de97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:03:53 np0005533252 nova_compute[230010]: 2025-11-24 10:03:53.146 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 24 05:03:53 np0005533252 nova_compute[230010]: 2025-11-24 10:03:53.147 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Ensure instance console log exists: /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 24 05:03:53 np0005533252 nova_compute[230010]: 2025-11-24 10:03:53.147 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:53 np0005533252 nova_compute[230010]: 2025-11-24 10:03:53.148 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:53 np0005533252 nova_compute[230010]: 2025-11-24 10:03:53.148 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:53 np0005533252 nova_compute[230010]: 2025-11-24 10:03:53.348 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Successfully created port: 891e7944-832b-408f-b645-6f51de733021 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 24 05:03:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:53.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:54 np0005533252 nova_compute[230010]: 2025-11-24 10:03:54.329 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Successfully updated port: 891e7944-832b-408f-b645-6f51de733021 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 24 05:03:54 np0005533252 nova_compute[230010]: 2025-11-24 10:03:54.342 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 05:03:54 np0005533252 nova_compute[230010]: 2025-11-24 10:03:54.342 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 05:03:54 np0005533252 nova_compute[230010]: 2025-11-24 10:03:54.342 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 24 05:03:54 np0005533252 nova_compute[230010]: 2025-11-24 10:03:54.425 230014 DEBUG nova.compute.manager [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-changed-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:03:54 np0005533252 nova_compute[230010]: 2025-11-24 10:03:54.426 230014 DEBUG nova.compute.manager [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing instance network info cache due to event network-changed-891e7944-832b-408f-b645-6f51de733021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 05:03:54 np0005533252 nova_compute[230010]: 2025-11-24 10:03:54.426 230014 DEBUG oslo_concurrency.lockutils [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 05:03:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:54.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:03:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:55.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:03:55 np0005533252 nova_compute[230010]: 2025-11-24 10:03:55.509 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 24 05:03:56 np0005533252 nova_compute[230010]: 2025-11-24 10:03:56.214 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:03:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:03:57 np0005533252 podman[243465]: 2025-11-24 10:03:57.305670102 +0000 UTC m=+0.049769060 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:03:57 np0005533252 nova_compute[230010]: 2025-11-24 10:03:57.367 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.696 230014 DEBUG nova.network.neutron [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.711 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.711 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance network_info: |[{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.712 230014 DEBUG oslo_concurrency.lockutils [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.712 230014 DEBUG nova.network.neutron [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing network info cache for port 891e7944-832b-408f-b645-6f51de733021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.715 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start _get_guest_xml network_info=[{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.719 230014 WARNING nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.724 230014 DEBUG nova.virt.libvirt.host [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.724 230014 DEBUG nova.virt.libvirt.host [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.728 230014 DEBUG nova.virt.libvirt.host [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.728 230014 DEBUG nova.virt.libvirt.host [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.729 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.729 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.729 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.730 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.730 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.730 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.730 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.731 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.731 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.731 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.731 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.732 230014 DEBUG nova.virt.hardware [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 24 05:03:58 np0005533252 nova_compute[230010]: 2025-11-24 10:03:58.734 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 05:03:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3514165054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.184 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.212 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.217 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:03:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:03:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:03:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:59.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:03:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:03:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 05:03:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3484212954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.700 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.703 230014 DEBUG nova.virt.libvirt.vif [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:03:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1741107609',display_name='tempest-TestNetworkBasicOps-server-1741107609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1741107609',id=13,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGYO1n2WM+59O3PRTf5fCo1d78/BH3Mc8BBXRdPASueO+JvuIAgEpEuVwsO0rsx8rIXsxHGWMhGFwwjbkrft3uNRj4gBBGDnbQiVDk9hyHkutBhfgKKfMw5qeDHykomezA==',key_name='tempest-TestNetworkBasicOps-1685206173',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-pxhddr0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:03:52Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=89909dc1-a7db-4cca-b837-5340532de97b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.703 230014 DEBUG nova.network.os_vif_util [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.705 230014 DEBUG nova.network.os_vif_util [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.706 230014 DEBUG nova.objects.instance [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 89909dc1-a7db-4cca-b837-5340532de97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.719 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] End _get_guest_xml xml=<domain type="kvm">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <uuid>89909dc1-a7db-4cca-b837-5340532de97b</uuid>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <name>instance-0000000d</name>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <memory>131072</memory>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <vcpu>1</vcpu>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <metadata>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <nova:name>tempest-TestNetworkBasicOps-server-1741107609</nova:name>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <nova:creationTime>2025-11-24 10:03:58</nova:creationTime>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <nova:flavor name="m1.nano">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <nova:memory>128</nova:memory>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <nova:disk>1</nova:disk>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <nova:swap>0</nova:swap>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <nova:ephemeral>0</nova:ephemeral>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <nova:vcpus>1</nova:vcpus>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      </nova:flavor>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <nova:owner>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      </nova:owner>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <nova:ports>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <nova:port uuid="891e7944-832b-408f-b645-6f51de733021">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        </nova:port>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      </nova:ports>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </nova:instance>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  </metadata>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <sysinfo type="smbios">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <system>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <entry name="manufacturer">RDO</entry>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <entry name="product">OpenStack Compute</entry>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <entry name="serial">89909dc1-a7db-4cca-b837-5340532de97b</entry>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <entry name="uuid">89909dc1-a7db-4cca-b837-5340532de97b</entry>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <entry name="family">Virtual Machine</entry>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </system>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  </sysinfo>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <os>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <boot dev="hd"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <smbios mode="sysinfo"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  </os>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <features>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <acpi/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <apic/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <vmcoreinfo/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  </features>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <clock offset="utc">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <timer name="pit" tickpolicy="delay"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <timer name="hpet" present="no"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  </clock>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <cpu mode="host-model" match="exact">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <topology sockets="1" cores="1" threads="1"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  </cpu>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  <devices>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <disk type="network" device="disk">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/89909dc1-a7db-4cca-b837-5340532de97b_disk">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      </source>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      </auth>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <target dev="vda" bus="virtio"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </disk>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <disk type="network" device="cdrom">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <driver type="raw" cache="none"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <source protocol="rbd" name="vms/89909dc1-a7db-4cca-b837-5340532de97b_disk.config">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <host name="192.168.122.100" port="6789"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <host name="192.168.122.102" port="6789"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <host name="192.168.122.101" port="6789"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      </source>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <auth username="openstack">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:        <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      </auth>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <target dev="sda" bus="sata"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </disk>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <interface type="ethernet">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <mac address="fa:16:3e:a8:16:2d"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <driver name="vhost" rx_queue_size="512"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <mtu size="1442"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <target dev="tap891e7944-83"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </interface>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <serial type="pty">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <log file="/var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/console.log" append="off"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </serial>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <video>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <model type="virtio"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </video>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <input type="tablet" bus="usb"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <rng model="virtio">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <backend model="random">/dev/urandom</backend>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </rng>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <controller type="usb" index="0"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    <memballoon model="virtio">
Nov 24 05:03:59 np0005533252 nova_compute[230010]:      <stats period="10"/>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:    </memballoon>
Nov 24 05:03:59 np0005533252 nova_compute[230010]:  </devices>
Nov 24 05:03:59 np0005533252 nova_compute[230010]: </domain>
Nov 24 05:03:59 np0005533252 nova_compute[230010]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.721 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Preparing to wait for external event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.721 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.722 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.722 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.722 230014 DEBUG nova.virt.libvirt.vif [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:03:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1741107609',display_name='tempest-TestNetworkBasicOps-server-1741107609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1741107609',id=13,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGYO1n2WM+59O3PRTf5fCo1d78/BH3Mc8BBXRdPASueO+JvuIAgEpEuVwsO0rsx8rIXsxHGWMhGFwwjbkrft3uNRj4gBBGDnbQiVDk9hyHkutBhfgKKfMw5qeDHykomezA==',key_name='tempest-TestNetworkBasicOps-1685206173',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-pxhddr0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:03:52Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=89909dc1-a7db-4cca-b837-5340532de97b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.723 230014 DEBUG nova.network.os_vif_util [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.723 230014 DEBUG nova.network.os_vif_util [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.724 230014 DEBUG os_vif [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.725 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.725 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.728 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.728 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap891e7944-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.728 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap891e7944-83, col_values=(('external_ids', {'iface-id': '891e7944-832b-408f-b645-6f51de733021', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:16:2d', 'vm-uuid': '89909dc1-a7db-4cca-b837-5340532de97b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.730 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:59 np0005533252 NetworkManager[48870]: <info>  [1763978639.7308] manager: (tap891e7944-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.732 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.743 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.744 230014 INFO os_vif [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83')#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.792 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.793 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.793 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:a8:16:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.794 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Using config drive#033[00m
Nov 24 05:03:59 np0005533252 nova_compute[230010]: 2025-11-24 10:03:59.821 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 05:04:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:04:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:04:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.209 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Creating config drive at /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.215 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgzomxnnk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.233 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.342 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgzomxnnk" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.373 230014 DEBUG nova.storage.rbd_utils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 89909dc1-a7db-4cca-b837-5340532de97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 05:04:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.380 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config 89909dc1-a7db-4cca-b837-5340532de97b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.550 230014 DEBUG oslo_concurrency.processutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config 89909dc1-a7db-4cca-b837-5340532de97b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.551 230014 INFO nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Deleting local config drive /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b/disk.config because it was imported into RBD.#033[00m
Nov 24 05:04:01 np0005533252 kernel: tap891e7944-83: entered promiscuous mode
Nov 24 05:04:01 np0005533252 NetworkManager[48870]: <info>  [1763978641.6151] manager: (tap891e7944-83): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Nov 24 05:04:01 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:01Z|00096|binding|INFO|Claiming lport 891e7944-832b-408f-b645-6f51de733021 for this chassis.
Nov 24 05:04:01 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:01Z|00097|binding|INFO|891e7944-832b-408f-b645-6f51de733021: Claiming fa:16:3e:a8:16:2d 10.100.0.4
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.616 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.624 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.641 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:16:2d 10.100.0.4'], port_security=['fa:16:3e:a8:16:2d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '89909dc1-a7db-4cca-b837-5340532de97b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22748050-40a9-4373-8c95-5da36c909edc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6dece4c3-fa7a-42ae-8b29-e0f3dfabd71c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72482cca-2f03-4eb7-ab95-968e79999420, chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=891e7944-832b-408f-b645-6f51de733021) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.642 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 891e7944-832b-408f-b645-6f51de733021 in datapath 22748050-40a9-4373-8c95-5da36c909edc bound to our chassis#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.643 142336 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22748050-40a9-4373-8c95-5da36c909edc#033[00m
Nov 24 05:04:01 np0005533252 systemd-machined[193537]: New machine qemu-6-instance-0000000d.
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.657 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e0072ee3-6398-4504-8b5c-093afdb230a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.659 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22748050-41 in ovnmeta-22748050-40a9-4373-8c95-5da36c909edc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.665 234803 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22748050-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.665 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6c146692-fdcd-4565-86ca-aaeb1e8bf414]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.667 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6e09b760-01fd-46cc-9b00-3ef041e8081d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.684 230014 DEBUG nova.network.neutron [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updated VIF entry in instance network info cache for port 891e7944-832b-408f-b645-6f51de733021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.685 230014 DEBUG nova.network.neutron [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.685 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[28c997e2-f2ca-4255-8832-200b93d06bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:01Z|00098|binding|INFO|Setting lport 891e7944-832b-408f-b645-6f51de733021 ovn-installed in OVS
Nov 24 05:04:01 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:01Z|00099|binding|INFO|Setting lport 891e7944-832b-408f-b645-6f51de733021 up in Southbound
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.690 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:01 np0005533252 systemd[1]: Started Virtual Machine qemu-6-instance-0000000d.
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.703 230014 DEBUG oslo_concurrency.lockutils [req-e7981b58-a0dc-4ae8-aba3-16540414f4a0 req-1b964b4f-105a-4507-a7d8-17a2858bca47 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 05:04:01 np0005533252 systemd-udevd[243705]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.718 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[6608d886-dd11-4223-b074-bbe938401c56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 NetworkManager[48870]: <info>  [1763978641.7349] device (tap891e7944-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 05:04:01 np0005533252 NetworkManager[48870]: <info>  [1763978641.7366] device (tap891e7944-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.756 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[53757821-f549-4f53-95c7-647119c3e904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.763 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[ce205dea-ea10-4a5d-b678-a1153625aecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 NetworkManager[48870]: <info>  [1763978641.7647] manager: (tap22748050-40): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.797 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[64589377-f914-4bdb-b541-e31ef8a0e557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.799 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[2c60f8f8-9a1d-4313-8d57-e4edb9f25952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 NetworkManager[48870]: <info>  [1763978641.8250] device (tap22748050-40): carrier: link connected
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.830 234819 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7149ea-abfe-4939-80fc-e00b05ee943d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.849 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[21e2b08c-77e8-4031-94f0-6c99a4c6dc01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22748050-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:0f:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458537, 'reachable_time': 18634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243735, 'error': None, 'target': 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.864 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[b28fa3a8-fc9a-4463-b59f-a4dcd516532b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:f06'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458537, 'tstamp': 458537}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243736, 'error': None, 'target': 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.886 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[1a58ff42-afd6-48f7-b678-519cb12bbcf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22748050-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:0f:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458537, 'reachable_time': 18634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243737, 'error': None, 'target': 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.914 230014 DEBUG nova.compute.manager [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.914 230014 DEBUG oslo_concurrency.lockutils [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.915 230014 DEBUG oslo_concurrency.lockutils [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.915 230014 DEBUG oslo_concurrency.lockutils [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:01 np0005533252 nova_compute[230010]: 2025-11-24 10:04:01.916 230014 DEBUG nova.compute.manager [req-4133837f-be31-4099-9855-37471ed8067f req-988c9255-8a34-427b-9f40-373dfdb0b360 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Processing event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.916 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1b53f9-1ba0-4799-a319-8f27b9a9bd16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 05:04:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271928424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 05:04:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 05:04:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271928424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.987 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[f2061f2f-8c44-43d9-8f2d-06634ab35870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.989 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22748050-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.989 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 24 05:04:01 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:01.990 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22748050-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:04:02 np0005533252 kernel: tap22748050-40: entered promiscuous mode
Nov 24 05:04:02 np0005533252 NetworkManager[48870]: <info>  [1763978642.0280] manager: (tap22748050-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.024 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.035 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22748050-40, col_values=(('external_ids', {'iface-id': 'c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 05:04:02 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:02Z|00100|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.038 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.039 142336 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22748050-40a9-4373-8c95-5da36c909edc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22748050-40a9-4373-8c95-5da36c909edc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.039 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[80743c0f-e055-4295-a48a-e345230442bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.041 142336 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: global
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    log         /dev/log local0 debug
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    log-tag     haproxy-metadata-proxy-22748050-40a9-4373-8c95-5da36c909edc
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    user        root
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    group       root
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    maxconn     1024
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    pidfile     /var/lib/neutron/external/pids/22748050-40a9-4373-8c95-5da36c909edc.pid.haproxy
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    daemon
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: defaults
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    log global
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    mode http
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    option httplog
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    option dontlognull
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    option http-server-close
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    option forwardfor
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    retries                 3
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    timeout http-request    30s
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    timeout connect         30s
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    timeout client          32s
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    timeout server          32s
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    timeout http-keep-alive 30s
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: 
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: listen listener
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    bind 169.254.169.254:80
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    server metadata /var/lib/neutron/metadata_proxy
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]:    http-request add-header X-OVN-Network-ID 22748050-40a9-4373-8c95-5da36c909edc
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 05:04:02 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:02.042 142336 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'env', 'PROCESS_TAG=haproxy-22748050-40a9-4373-8c95-5da36c909edc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22748050-40a9-4373-8c95-5da36c909edc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.052 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.248 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978642.24744, 89909dc1-a7db-4cca-b837-5340532de97b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.248 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] VM Started (Lifecycle Event)
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.250 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.254 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.258 230014 INFO nova.virt.libvirt.driver [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance spawned successfully.
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.258 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.266 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.272 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.275 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.275 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.275 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.276 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.276 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.276 230014 DEBUG nova.virt.libvirt.driver [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.308 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.309 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978642.247672, 89909dc1-a7db-4cca-b837-5340532de97b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.309 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] VM Paused (Lifecycle Event)
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.335 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.338 230014 DEBUG nova.virt.driver [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] Emitting event <LifecycleEvent: 1763978642.253599, 89909dc1-a7db-4cca-b837-5340532de97b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.339 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] VM Resumed (Lifecycle Event)
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.347 230014 INFO nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Took 9.94 seconds to spawn the instance on the hypervisor.
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.347 230014 DEBUG nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.355 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.358 230014 DEBUG nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.383 230014 INFO nova.compute.manager [None req-7781f489-90a3-45f2-98f2-bf7ccb5c1239 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.411 230014 INFO nova.compute.manager [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Took 10.80 seconds to build instance.
Nov 24 05:04:02 np0005533252 podman[243812]: 2025-11-24 10:04:02.416073536 +0000 UTC m=+0.048642772 container create 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 05:04:02 np0005533252 nova_compute[230010]: 2025-11-24 10:04:02.426 230014 DEBUG oslo_concurrency.lockutils [None req-5d5b5cfb-a9b5-427c-91c6-3b86a939f388 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:04:02 np0005533252 systemd[1]: Started libpod-conmon-886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2.scope.
Nov 24 05:04:02 np0005533252 systemd[1]: Started libcrun container.
Nov 24 05:04:02 np0005533252 podman[243812]: 2025-11-24 10:04:02.389364391 +0000 UTC m=+0.021933667 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 05:04:02 np0005533252 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce70e599b070ad1e348a0dc736d83aebd11dabe2621d209a81daa70e66a1ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 05:04:02 np0005533252 podman[243812]: 2025-11-24 10:04:02.50036473 +0000 UTC m=+0.132934006 container init 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 05:04:02 np0005533252 podman[243812]: 2025-11-24 10:04:02.506308805 +0000 UTC m=+0.138878051 container start 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 05:04:02 np0005533252 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [NOTICE]   (243831) : New worker (243833) forked
Nov 24 05:04:02 np0005533252 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [NOTICE]   (243831) : Loading success.
Nov 24 05:04:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:04:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:02.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:04:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:04:03 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:04:03 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:04:03 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:04:03 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:04:03 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:04:03 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:04:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:03 np0005533252 nova_compute[230010]: 2025-11-24 10:04:03.997 230014 DEBUG nova.compute.manager [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 05:04:03 np0005533252 nova_compute[230010]: 2025-11-24 10:04:03.997 230014 DEBUG oslo_concurrency.lockutils [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 05:04:03 np0005533252 nova_compute[230010]: 2025-11-24 10:04:03.997 230014 DEBUG oslo_concurrency.lockutils [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 05:04:03 np0005533252 nova_compute[230010]: 2025-11-24 10:04:03.998 230014 DEBUG oslo_concurrency.lockutils [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:04:03 np0005533252 nova_compute[230010]: 2025-11-24 10:04:03.998 230014 DEBUG nova.compute.manager [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] No waiting events found dispatching network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 05:04:03 np0005533252 nova_compute[230010]: 2025-11-24 10:04:03.998 230014 WARNING nova.compute.manager [req-44d328e2-9de2-4aa2-a12f-56f771c4c26d req-c64d57e2-3c0b-454e-b85d-6bf7e2392d50 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received unexpected event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 for instance with vm_state active and task_state None.
Nov 24 05:04:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:04 np0005533252 nova_compute[230010]: 2025-11-24 10:04:04.731 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:04:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:04.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:05.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:05 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:05Z|00101|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 05:04:05 np0005533252 NetworkManager[48870]: <info>  [1763978645.9818] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 24 05:04:05 np0005533252 nova_compute[230010]: 2025-11-24 10:04:05.982 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:04:05 np0005533252 NetworkManager[48870]: <info>  [1763978645.9831] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 24 05:04:06 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:06Z|00102|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 05:04:06 np0005533252 nova_compute[230010]: 2025-11-24 10:04:06.031 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:04:06 np0005533252 nova_compute[230010]: 2025-11-24 10:04:06.035 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:04:06 np0005533252 nova_compute[230010]: 2025-11-24 10:04:06.219 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:04:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:07 np0005533252 nova_compute[230010]: 2025-11-24 10:04:07.015 230014 DEBUG nova.compute.manager [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-changed-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 05:04:07 np0005533252 nova_compute[230010]: 2025-11-24 10:04:07.015 230014 DEBUG nova.compute.manager [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing instance network info cache due to event network-changed-891e7944-832b-408f-b645-6f51de733021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 05:04:07 np0005533252 nova_compute[230010]: 2025-11-24 10:04:07.016 230014 DEBUG oslo_concurrency.lockutils [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 05:04:07 np0005533252 nova_compute[230010]: 2025-11-24 10:04:07.016 230014 DEBUG oslo_concurrency.lockutils [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 05:04:07 np0005533252 nova_compute[230010]: 2025-11-24 10:04:07.016 230014 DEBUG nova.network.neutron [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing network info cache for port 891e7944-832b-408f-b645-6f51de733021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 05:04:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:07.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:04:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:04:08 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:04:08 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:04:08 np0005533252 nova_compute[230010]: 2025-11-24 10:04:08.723 230014 DEBUG nova.network.neutron [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updated VIF entry in instance network info cache for port 891e7944-832b-408f-b645-6f51de733021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 05:04:08 np0005533252 nova_compute[230010]: 2025-11-24 10:04:08.724 230014 DEBUG nova.network.neutron [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:04:08 np0005533252 nova_compute[230010]: 2025-11-24 10:04:08.738 230014 DEBUG oslo_concurrency.lockutils [req-cdd32b15-607b-42c2-bbf6-2ba39a51950a req-741c86e8-cb6b-4ee2-af48-064f18e82056 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 05:04:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:08.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:09.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:09 np0005533252 nova_compute[230010]: 2025-11-24 10:04:09.763 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:10.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:11 np0005533252 nova_compute[230010]: 2025-11-24 10:04:11.221 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:12.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:13.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:14 np0005533252 nova_compute[230010]: 2025-11-24 10:04:14.766 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:14.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:15 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:15Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:16:2d 10.100.0.4
Nov 24 05:04:15 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:15Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:16:2d 10.100.0.4
Nov 24 05:04:15 np0005533252 podman[243900]: 2025-11-24 10:04:15.31607729 +0000 UTC m=+0.051906312 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 24 05:04:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:15.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:04:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:04:16 np0005533252 nova_compute[230010]: 2025-11-24 10:04:16.223 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:16.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:17.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:18 np0005533252 nova_compute[230010]: 2025-11-24 10:04:18.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:18.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:19.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:19 np0005533252 nova_compute[230010]: 2025-11-24 10:04:19.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:20.068 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:20.068 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:20.069 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:20 np0005533252 nova_compute[230010]: 2025-11-24 10:04:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000071s ======
Nov 24 05:04:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:20.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Nov 24 05:04:21 np0005533252 nova_compute[230010]: 2025-11-24 10:04:21.225 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:21 np0005533252 podman[243923]: 2025-11-24 10:04:21.337205927 +0000 UTC m=+0.080074352 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 24 05:04:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:21.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:21 np0005533252 nova_compute[230010]: 2025-11-24 10:04:21.971 230014 INFO nova.compute.manager [None req-7a0e4815-6f13-4663-834a-22e332dc32d2 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Get console output#033[00m
Nov 24 05:04:21 np0005533252 nova_compute[230010]: 2025-11-24 10:04:21.976 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 24 05:04:22 np0005533252 nova_compute[230010]: 2025-11-24 10:04:22.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:22 np0005533252 nova_compute[230010]: 2025-11-24 10:04:22.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:04:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:22.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:23 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:23Z|00103|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 05:04:23 np0005533252 nova_compute[230010]: 2025-11-24 10:04:23.227 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:23 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:23Z|00104|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 05:04:23 np0005533252 nova_compute[230010]: 2025-11-24 10:04:23.284 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:23.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:24 np0005533252 nova_compute[230010]: 2025-11-24 10:04:24.761 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:24 np0005533252 nova_compute[230010]: 2025-11-24 10:04:24.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:24 np0005533252 nova_compute[230010]: 2025-11-24 10:04:24.770 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:24.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:25.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:25 np0005533252 nova_compute[230010]: 2025-11-24 10:04:25.457 230014 INFO nova.compute.manager [None req-e959c7ef-2599-45c9-a479-cadbbe82c683 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Get console output#033[00m
Nov 24 05:04:25 np0005533252 nova_compute[230010]: 2025-11-24 10:04:25.461 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 24 05:04:25 np0005533252 nova_compute[230010]: 2025-11-24 10:04:25.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:25 np0005533252 nova_compute[230010]: 2025-11-24 10:04:25.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:25 np0005533252 nova_compute[230010]: 2025-11-24 10:04:25.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:25 np0005533252 nova_compute[230010]: 2025-11-24 10:04:25.794 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:25 np0005533252 nova_compute[230010]: 2025-11-24 10:04:25.794 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:04:25 np0005533252 nova_compute[230010]: 2025-11-24 10:04:25.794 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:04:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:04:26 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3436723982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.200 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:04:26 np0005533252 systemd[1]: Starting dnf makecache...
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.229 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.268 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.269 230014 DEBUG nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.401 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.402 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4732MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.403 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.403 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:26 np0005533252 dnf[243977]: Metadata cache refreshed recently.
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.480 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Instance 89909dc1-a7db-4cca-b837-5340532de97b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.481 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.481 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.504 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 24 05:04:26 np0005533252 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 05:04:26 np0005533252 systemd[1]: Finished dnf makecache.
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.520 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.520 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.537 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.558 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 24 05:04:26 np0005533252 nova_compute[230010]: 2025-11-24 10:04:26.591 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:04:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:04:27 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3125762373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:04:27 np0005533252 nova_compute[230010]: 2025-11-24 10:04:27.041 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:04:27 np0005533252 nova_compute[230010]: 2025-11-24 10:04:27.048 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:04:27 np0005533252 nova_compute[230010]: 2025-11-24 10:04:27.061 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:04:27 np0005533252 nova_compute[230010]: 2025-11-24 10:04:27.087 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:04:27 np0005533252 nova_compute[230010]: 2025-11-24 10:04:27.088 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:27.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:28 np0005533252 podman[244001]: 2025-11-24 10:04:28.316543196 +0000 UTC m=+0.051676076 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 05:04:28 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:28Z|00105|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 05:04:28 np0005533252 NetworkManager[48870]: <info>  [1763978668.6329] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 24 05:04:28 np0005533252 NetworkManager[48870]: <info>  [1763978668.6335] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 24 05:04:28 np0005533252 nova_compute[230010]: 2025-11-24 10:04:28.632 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:28 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:28Z|00106|binding|INFO|Releasing lport c9e1a544-3313-45b9-9f1e-5b8ba7d7cc61 from this chassis (sb_readonly=0)
Nov 24 05:04:28 np0005533252 nova_compute[230010]: 2025-11-24 10:04:28.637 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:28 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:28.897 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:04:28 np0005533252 nova_compute[230010]: 2025-11-24 10:04:28.898 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:28 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:28.898 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 05:04:28 np0005533252 nova_compute[230010]: 2025-11-24 10:04:28.963 230014 INFO nova.compute.manager [None req-3ecc1688-f715-4769-810a-8eb0cb0ced3f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Get console output#033[00m
Nov 24 05:04:28 np0005533252 nova_compute[230010]: 2025-11-24 10:04:28.968 236028 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 24 05:04:29 np0005533252 nova_compute[230010]: 2025-11-24 10:04:29.090 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:29 np0005533252 nova_compute[230010]: 2025-11-24 10:04:29.091 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:04:29 np0005533252 nova_compute[230010]: 2025-11-24 10:04:29.091 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:04:29 np0005533252 nova_compute[230010]: 2025-11-24 10:04:29.333 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 05:04:29 np0005533252 nova_compute[230010]: 2025-11-24 10:04:29.333 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 05:04:29 np0005533252 nova_compute[230010]: 2025-11-24 10:04:29.333 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 24 05:04:29 np0005533252 nova_compute[230010]: 2025-11-24 10:04:29.334 230014 DEBUG nova.objects.instance [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 89909dc1-a7db-4cca-b837-5340532de97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:04:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:29.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:29 np0005533252 nova_compute[230010]: 2025-11-24 10:04:29.796 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:04:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.798 230014 DEBUG nova.compute.manager [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-changed-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.798 230014 DEBUG nova.compute.manager [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing instance network info cache due to event network-changed-891e7944-832b-408f-b645-6f51de733021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.799 230014 DEBUG oslo_concurrency.lockutils [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 05:04:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.868 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.869 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.869 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.869 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.869 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.870 230014 INFO nova.compute.manager [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Terminating instance#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.871 230014 DEBUG nova.compute.manager [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 24 05:04:30 np0005533252 kernel: tap891e7944-83 (unregistering): left promiscuous mode
Nov 24 05:04:30 np0005533252 NetworkManager[48870]: <info>  [1763978670.9219] device (tap891e7944-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 05:04:30 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:30Z|00107|binding|INFO|Releasing lport 891e7944-832b-408f-b645-6f51de733021 from this chassis (sb_readonly=0)
Nov 24 05:04:30 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:30Z|00108|binding|INFO|Setting lport 891e7944-832b-408f-b645-6f51de733021 down in Southbound
Nov 24 05:04:30 np0005533252 ovn_controller[132966]: 2025-11-24T10:04:30Z|00109|binding|INFO|Removing iface tap891e7944-83 ovn-installed in OVS
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.967 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:30 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.978 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:16:2d 10.100.0.4'], port_security=['fa:16:3e:a8:16:2d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '89909dc1-a7db-4cca-b837-5340532de97b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22748050-40a9-4373-8c95-5da36c909edc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dece4c3-fa7a-42ae-8b29-e0f3dfabd71c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72482cca-2f03-4eb7-ab95-968e79999420, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>], logical_port=891e7944-832b-408f-b645-6f51de733021) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f5c78678ac0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:04:30 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.979 142336 INFO neutron.agent.ovn.metadata.agent [-] Port 891e7944-832b-408f-b645-6f51de733021 in datapath 22748050-40a9-4373-8c95-5da36c909edc unbound from our chassis#033[00m
Nov 24 05:04:30 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.980 142336 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22748050-40a9-4373-8c95-5da36c909edc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 24 05:04:30 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.982 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[9da7dc2f-a60d-4de0-a9af-c316969ef6bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:30 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:30.983 142336 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22748050-40a9-4373-8c95-5da36c909edc namespace which is not needed anymore#033[00m
Nov 24 05:04:30 np0005533252 nova_compute[230010]: 2025-11-24 10:04:30.995 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:31 np0005533252 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 24 05:04:31 np0005533252 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 13.772s CPU time.
Nov 24 05:04:31 np0005533252 systemd-machined[193537]: Machine qemu-6-instance-0000000d terminated.
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.103 230014 INFO nova.virt.libvirt.driver [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Instance destroyed successfully.#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.103 230014 DEBUG nova.objects.instance [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 89909dc1-a7db-4cca-b837-5340532de97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 05:04:31 np0005533252 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [NOTICE]   (243831) : haproxy version is 2.8.14-c23fe91
Nov 24 05:04:31 np0005533252 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [NOTICE]   (243831) : path to executable is /usr/sbin/haproxy
Nov 24 05:04:31 np0005533252 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [WARNING]  (243831) : Exiting Master process...
Nov 24 05:04:31 np0005533252 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [WARNING]  (243831) : Exiting Master process...
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.116 230014 DEBUG nova.virt.libvirt.vif [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T10:03:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1741107609',display_name='tempest-TestNetworkBasicOps-server-1741107609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1741107609',id=13,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGYO1n2WM+59O3PRTf5fCo1d78/BH3Mc8BBXRdPASueO+JvuIAgEpEuVwsO0rsx8rIXsxHGWMhGFwwjbkrft3uNRj4gBBGDnbQiVDk9hyHkutBhfgKKfMw5qeDHykomezA==',key_name='tempest-TestNetworkBasicOps-1685206173',keypairs=<?>,launch_index=0,launched_at=2025-11-24T10:04:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-pxhddr0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T10:04:02Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=89909dc1-a7db-4cca-b837-5340532de97b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 24 05:04:31 np0005533252 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [ALERT]    (243831) : Current worker (243833) exited with code 143 (Terminated)
Nov 24 05:04:31 np0005533252 neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc[243827]: [WARNING]  (243831) : All workers exited. Exiting... (0)
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.117 230014 DEBUG nova.network.os_vif_util [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.118 230014 DEBUG nova.network.os_vif_util [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.119 230014 DEBUG os_vif [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 24 05:04:31 np0005533252 systemd[1]: libpod-886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2.scope: Deactivated successfully.
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.121 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.121 230014 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap891e7944-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.123 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:31 np0005533252 podman[244070]: 2025-11-24 10:04:31.12675427 +0000 UTC m=+0.046806687 container died 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.127 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.129 230014 INFO os_vif [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:16:2d,bridge_name='br-int',has_traffic_filtering=True,id=891e7944-832b-408f-b645-6f51de733021,network=Network(22748050-40a9-4373-8c95-5da36c909edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891e7944-83')#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.148 230014 DEBUG nova.compute.manager [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-unplugged-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.148 230014 DEBUG oslo_concurrency.lockutils [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.149 230014 DEBUG oslo_concurrency.lockutils [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.149 230014 DEBUG oslo_concurrency.lockutils [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.149 230014 DEBUG nova.compute.manager [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] No waiting events found dispatching network-vif-unplugged-891e7944-832b-408f-b645-6f51de733021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.150 230014 DEBUG nova.compute.manager [req-9bd28c05-1dcf-4cc5-8150-f9fca2fabe94 req-9b3aba1e-5237-45ae-b54e-120cf64232cd 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-unplugged-891e7944-832b-408f-b645-6f51de733021 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 24 05:04:31 np0005533252 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2-userdata-shm.mount: Deactivated successfully.
Nov 24 05:04:31 np0005533252 systemd[1]: var-lib-containers-storage-overlay-19ce70e599b070ad1e348a0dc736d83aebd11dabe2621d209a81daa70e66a1ce-merged.mount: Deactivated successfully.
Nov 24 05:04:31 np0005533252 podman[244070]: 2025-11-24 10:04:31.175190806 +0000 UTC m=+0.095243203 container cleanup 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 24 05:04:31 np0005533252 systemd[1]: libpod-conmon-886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2.scope: Deactivated successfully.
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.229 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:31 np0005533252 podman[244132]: 2025-11-24 10:04:31.243769705 +0000 UTC m=+0.043103656 container remove 886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.249 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[e74213ae-93d5-4dde-b45e-0116d185efe2]: (4, ('Mon Nov 24 10:04:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc (886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2)\n886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2\nMon Nov 24 10:04:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-22748050-40a9-4373-8c95-5da36c909edc (886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2)\n886a55b088e85bc6370f29fe93e76d5fbf84307c17973cfc947291698efb33b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.251 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c94712ac-c6a5-4fbb-8b79-42f5c5e7b6f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.252 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22748050-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:04:31 np0005533252 kernel: tap22748050-40: left promiscuous mode
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.255 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.275 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.275 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[c5df8033-b70a-4f39-a3a2-b7e3deec0db9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.294 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[13c8dfee-6e9c-456e-b6d4-dae604f6fb4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.295 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[d56784b5-a2f0-4d82-ba2c-40ce6e4ec5b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.313 234803 DEBUG oslo.privsep.daemon [-] privsep: reply[17d43f6e-d6b5-46d2-b5d2-870e21a49dc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458530, 'reachable_time': 36289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244147, 'error': None, 'target': 'ovnmeta-22748050-40a9-4373-8c95-5da36c909edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.317 142476 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22748050-40a9-4373-8c95-5da36c909edc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 24 05:04:31 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:31.317 142476 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2d138a-1168-4ce7-ad3f-5d5681f1f30c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 05:04:31 np0005533252 systemd[1]: run-netns-ovnmeta\x2d22748050\x2d40a9\x2d4373\x2d8c95\x2d5da36c909edc.mount: Deactivated successfully.
Nov 24 05:04:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.562 230014 INFO nova.virt.libvirt.driver [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Deleting instance files /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b_del#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.563 230014 INFO nova.virt.libvirt.driver [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Deletion of /var/lib/nova/instances/89909dc1-a7db-4cca-b837-5340532de97b_del complete#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.606 230014 INFO nova.compute.manager [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.607 230014 DEBUG oslo.service.loopingcall [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.608 230014 DEBUG nova.compute.manager [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 24 05:04:31 np0005533252 nova_compute[230010]: 2025-11-24 10:04:31.608 230014 DEBUG nova.network.neutron [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.138 230014 DEBUG nova.network.neutron [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.160 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.160 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.160 230014 DEBUG oslo_concurrency.lockutils [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.161 230014 DEBUG nova.network.neutron [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Refreshing network info cache for port 891e7944-832b-408f-b645-6f51de733021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.162 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.163 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.560 230014 DEBUG nova.network.neutron [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.574 230014 INFO nova.compute.manager [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Took 0.97 seconds to deallocate network for instance.#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.630 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.630 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.678 230014 DEBUG oslo_concurrency.processutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:04:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:32 np0005533252 nova_compute[230010]: 2025-11-24 10:04:32.863 230014 DEBUG nova.compute.manager [req-564521e6-1234-42df-8be5-a41f17fd9be6 req-ad155b29-92b1-4627-a276-30d711280658 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-deleted-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:04:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:04:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1074205317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.095 230014 DEBUG oslo_concurrency.processutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.101 230014 DEBUG nova.compute.provider_tree [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.115 230014 DEBUG nova.scheduler.client.report [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.131 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.156 230014 INFO nova.scheduler.client.report [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 89909dc1-a7db-4cca-b837-5340532de97b#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.219 230014 DEBUG oslo_concurrency.lockutils [None req-789a9ffd-5d3c-425a-89c9-4ccb0f324ce9 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.229 230014 DEBUG nova.compute.manager [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.229 230014 DEBUG oslo_concurrency.lockutils [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "89909dc1-a7db-4cca-b837-5340532de97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.230 230014 DEBUG oslo_concurrency.lockutils [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.230 230014 DEBUG oslo_concurrency.lockutils [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "89909dc1-a7db-4cca-b837-5340532de97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.230 230014 DEBUG nova.compute.manager [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] No waiting events found dispatching network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.231 230014 WARNING nova.compute.manager [req-9c677dfe-8c27-480e-9edb-46adbc8b5ab1 req-b6442491-89f0-45d6-822c-e5e36b026280 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Received unexpected event network-vif-plugged-891e7944-832b-408f-b645-6f51de733021 for instance with vm_state deleted and task_state None.#033[00m
Nov 24 05:04:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:33.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.572 230014 DEBUG nova.network.neutron [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updated VIF entry in instance network info cache for port 891e7944-832b-408f-b645-6f51de733021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.572 230014 DEBUG nova.network.neutron [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Updating instance_info_cache with network_info: [{"id": "891e7944-832b-408f-b645-6f51de733021", "address": "fa:16:3e:a8:16:2d", "network": {"id": "22748050-40a9-4373-8c95-5da36c909edc", "bridge": "br-int", "label": "tempest-network-smoke--696376811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891e7944-83", "ovs_interfaceid": "891e7944-832b-408f-b645-6f51de733021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 24 05:04:33 np0005533252 nova_compute[230010]: 2025-11-24 10:04:33.585 230014 DEBUG oslo_concurrency.lockutils [req-620a44d4-6dd9-4cdd-9131-afcba1bf7835 req-d5b13bbf-bb33-4f29-8cb8-18755ebf8350 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-89909dc1-a7db-4cca-b837-5340532de97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 05:04:33 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:04:33.900 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:04:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:35.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:36 np0005533252 nova_compute[230010]: 2025-11-24 10:04:36.125 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:36 np0005533252 nova_compute[230010]: 2025-11-24 10:04:36.233 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:36 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 05:04:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:36.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:37.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:37 np0005533252 nova_compute[230010]: 2025-11-24 10:04:37.696 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:37 np0005533252 nova_compute[230010]: 2025-11-24 10:04:37.768 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:38.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:39.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:40.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:41 np0005533252 nova_compute[230010]: 2025-11-24 10:04:41.130 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:41 np0005533252 nova_compute[230010]: 2025-11-24 10:04:41.268 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:41.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:42.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:43.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:04:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:04:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:45.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:46 np0005533252 nova_compute[230010]: 2025-11-24 10:04:46.103 230014 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978671.1009011, 89909dc1-a7db-4cca-b837-5340532de97b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 05:04:46 np0005533252 nova_compute[230010]: 2025-11-24 10:04:46.103 230014 INFO nova.compute.manager [-] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] VM Stopped (Lifecycle Event)#033[00m
Nov 24 05:04:46 np0005533252 nova_compute[230010]: 2025-11-24 10:04:46.117 230014 DEBUG nova.compute.manager [None req-586f4402-9266-4b2a-9077-f3e71a877705 - - - - - -] [instance: 89909dc1-a7db-4cca-b837-5340532de97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 05:04:46 np0005533252 nova_compute[230010]: 2025-11-24 10:04:46.132 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:46 np0005533252 nova_compute[230010]: 2025-11-24 10:04:46.270 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:46 np0005533252 podman[244181]: 2025-11-24 10:04:46.326774272 +0000 UTC m=+0.059798595 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 05:04:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:46.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:47.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:48.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:51 np0005533252 nova_compute[230010]: 2025-11-24 10:04:51.136 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:51 np0005533252 nova_compute[230010]: 2025-11-24 10:04:51.271 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:51.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:52 np0005533252 podman[244230]: 2025-11-24 10:04:52.336174352 +0000 UTC m=+0.076582847 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 24 05:04:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:53.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:04:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:55.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:56 np0005533252 nova_compute[230010]: 2025-11-24 10:04:56.137 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:56 np0005533252 nova_compute[230010]: 2025-11-24 10:04:56.273 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:04:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:56.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:57.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:04:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:04:59 np0005533252 podman[244259]: 2025-11-24 10:04:59.374827684 +0000 UTC m=+0.080095662 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 05:04:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:04:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:04:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:04:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:05:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:05:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:01 np0005533252 nova_compute[230010]: 2025-11-24 10:05:01.178 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:01 np0005533252 nova_compute[230010]: 2025-11-24 10:05:01.277 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:05:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:01.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:05:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 05:05:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840377285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 05:05:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 05:05:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840377285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 05:05:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:02.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:03.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:05:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:04.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:05:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:05.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:06 np0005533252 nova_compute[230010]: 2025-11-24 10:05:06.180 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:06 np0005533252 nova_compute[230010]: 2025-11-24 10:05:06.278 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:06.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:07.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:05:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:05:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:08.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:09 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:05:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:05:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:05:09 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:05:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:09.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:09 np0005533252 ovn_controller[132966]: 2025-11-24T10:05:09Z|00110|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 24 05:05:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:05:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:10.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:05:11 np0005533252 nova_compute[230010]: 2025-11-24 10:05:11.184 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:11 np0005533252 nova_compute[230010]: 2025-11-24 10:05:11.281 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:11.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:12.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:05:13 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:05:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:05:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:13.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:05:14 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:05:14 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:05:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:14.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:05:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:05:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:15.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:16 np0005533252 nova_compute[230010]: 2025-11-24 10:05:16.188 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:16 np0005533252 nova_compute[230010]: 2025-11-24 10:05:16.283 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:05:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:16.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:05:17 np0005533252 podman[244424]: 2025-11-24 10:05:17.344338041 +0000 UTC m=+0.075687864 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:05:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:05:20.069 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:05:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:05:20.070 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:05:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:05:20.070 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:05:20 np0005533252 nova_compute[230010]: 2025-11-24 10:05:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:20 np0005533252 nova_compute[230010]: 2025-11-24 10:05:20.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:20.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:21 np0005533252 nova_compute[230010]: 2025-11-24 10:05:21.192 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:21 np0005533252 nova_compute[230010]: 2025-11-24 10:05:21.286 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:21.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:22.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:23 np0005533252 podman[244448]: 2025-11-24 10:05:23.327360395 +0000 UTC m=+0.069999980 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 05:05:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:23.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:23 np0005533252 nova_compute[230010]: 2025-11-24 10:05:23.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:23 np0005533252 nova_compute[230010]: 2025-11-24 10:05:23.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:05:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:24 np0005533252 nova_compute[230010]: 2025-11-24 10:05:24.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:24.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:25.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:26 np0005533252 nova_compute[230010]: 2025-11-24 10:05:26.195 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:26 np0005533252 nova_compute[230010]: 2025-11-24 10:05:26.287 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:26 np0005533252 nova_compute[230010]: 2025-11-24 10:05:26.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:26.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:27.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:27 np0005533252 nova_compute[230010]: 2025-11-24 10:05:27.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:27 np0005533252 nova_compute[230010]: 2025-11-24 10:05:27.776 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:27 np0005533252 nova_compute[230010]: 2025-11-24 10:05:27.794 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:05:27 np0005533252 nova_compute[230010]: 2025-11-24 10:05:27.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:05:27 np0005533252 nova_compute[230010]: 2025-11-24 10:05:27.795 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:05:27 np0005533252 nova_compute[230010]: 2025-11-24 10:05:27.795 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:05:27 np0005533252 nova_compute[230010]: 2025-11-24 10:05:27.795 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:05:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:05:28 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/479419961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.250 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.415 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.416 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4923MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.416 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.417 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.479 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.479 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.511 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:05:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:28.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:05:28 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1358056965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.971 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.976 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:05:28 np0005533252 nova_compute[230010]: 2025-11-24 10:05:28.988 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:05:29 np0005533252 nova_compute[230010]: 2025-11-24 10:05:29.036 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:05:29 np0005533252 nova_compute[230010]: 2025-11-24 10:05:29.036 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:05:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:29.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:30 np0005533252 nova_compute[230010]: 2025-11-24 10:05:30.025 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:30 np0005533252 nova_compute[230010]: 2025-11-24 10:05:30.026 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:05:30 np0005533252 nova_compute[230010]: 2025-11-24 10:05:30.026 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:05:30 np0005533252 nova_compute[230010]: 2025-11-24 10:05:30.042 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:05:30 np0005533252 nova_compute[230010]: 2025-11-24 10:05:30.042 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:30 np0005533252 nova_compute[230010]: 2025-11-24 10:05:30.043 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:05:30 np0005533252 podman[244548]: 2025-11-24 10:05:30.33336738 +0000 UTC m=+0.074526752 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 24 05:05:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:05:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:05:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:30.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:31 np0005533252 nova_compute[230010]: 2025-11-24 10:05:31.199 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:31 np0005533252 nova_compute[230010]: 2025-11-24 10:05:31.288 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:31.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:32.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:33.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:36 np0005533252 nova_compute[230010]: 2025-11-24 10:05:36.214 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:36 np0005533252 nova_compute[230010]: 2025-11-24 10:05:36.290 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:36.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:38.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:39.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:40.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:41 np0005533252 nova_compute[230010]: 2025-11-24 10:05:41.219 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:41 np0005533252 nova_compute[230010]: 2025-11-24 10:05:41.294 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:41 np0005533252 systemd-logind[823]: New session 55 of user zuul.
Nov 24 05:05:42 np0005533252 systemd[1]: Started Session 55 of User zuul.
Nov 24 05:05:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:42.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:44.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:05:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:05:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 05:05:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2646850891' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 05:05:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:46 np0005533252 nova_compute[230010]: 2025-11-24 10:05:46.221 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:46 np0005533252 nova_compute[230010]: 2025-11-24 10:05:46.296 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:46.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:47 np0005533252 podman[244885]: 2025-11-24 10:05:47.766364281 +0000 UTC m=+0.062377992 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 24 05:05:48 np0005533252 ovs-vsctl[244933]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 05:05:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:49 np0005533252 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 05:05:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:49 np0005533252 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 05:05:49 np0005533252 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 05:05:50 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: cache status {prefix=cache status} (starting...)
Nov 24 05:05:50 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:50 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: client ls {prefix=client ls} (starting...)
Nov 24 05:05:50 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:50 np0005533252 lvm[245310]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 05:05:50 np0005533252 lvm[245310]: VG ceph_vg0 finished
Nov 24 05:05:50 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 05:05:50 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:50.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 05:05:51 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/805920003' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:51 np0005533252 nova_compute[230010]: 2025-11-24 10:05:51.227 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:51 np0005533252 nova_compute[230010]: 2025-11-24 10:05:51.297 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:05:51 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/467161652' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:51.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:51 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 24 05:05:51 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2099222001' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 05:05:51 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:52 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: ops {prefix=ops} (starting...)
Nov 24 05:05:52 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 24 05:05:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1395852554' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 05:05:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 24 05:05:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2236574839' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 05:05:52 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: session ls {prefix=session ls} (starting...)
Nov 24 05:05:52 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:05:52 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 05:05:52 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/464524726' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 05:05:52 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: status {prefix=status} (starting...)
Nov 24 05:05:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:52.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 05:05:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2385770739' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 05:05:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 05:05:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2194257639' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 05:05:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 05:05:53 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/832786423' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2100678030' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1728751067' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 05:05:54 np0005533252 podman[245849]: 2025-11-24 10:05:54.362584083 +0000 UTC m=+0.091883290 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555652975' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/422130285' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 24 05:05:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/833142917' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 05:05:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:54.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:55 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 05:05:55 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4157313813' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 05:05:55 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 24 05:05:55 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1743021929' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 05:05:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:05:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:05:55 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 05:05:55 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2180658346' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 05:05:56 np0005533252 nova_compute[230010]: 2025-11-24 10:05:56.231 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:56 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 05:05:56 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/419866527' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 05:05:56 np0005533252 nova_compute[230010]: 2025-11-24 10:05:56.300 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1032192 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1032192 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1024000 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1024000 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1024000 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1015808 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1015808 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 1007616 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 999424 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 999424 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 999424 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 991232 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 991232 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 991232 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 983040 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 983040 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 983040 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 974848 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 966656 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 958464 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 958464 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 958464 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 950272 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 933888 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 925696 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 925696 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 917504 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 917504 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 909312 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 909312 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 909312 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 901120 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 901120 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 892928 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 892928 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 884736 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 884736 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 884736 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 876544 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 876544 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 876544 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 876544 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 868352 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 860160 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 851968 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 851968 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 851968 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bd23a800 session 0x5634bf08bc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 843776 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 843776 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 843776 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 843776 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 835584 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 835584 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 827392 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 827392 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 827392 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 819200 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 819200 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 811008 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 811008 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 811008 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 802816 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950515 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 802816 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 802816 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 794624 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 794624 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 101.126350403s of 101.215919495s, submitted: 3
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 794624 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 786432 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 778240 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 770048 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 770048 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 761856 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 761856 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 761856 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 753664 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 753664 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 753664 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 745472 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 745472 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 745472 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 737280 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 737280 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 729088 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 729088 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 729088 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 720896 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 720896 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 720896 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 704512 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 704512 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634bef0de00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 704512 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 696320 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 696320 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 696320 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 688128 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 688128 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 688128 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 679936 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 679936 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 671744 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 671744 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 671744 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949333 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 663552 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 663552 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.788032532s of 37.799560547s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 647168 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 647168 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 638976 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 638976 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 638976 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 630784 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 630784 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 630784 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 622592 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 614400 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 614400 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 606208 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 606208 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 606208 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 606208 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 598016 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 598016 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 598016 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 589824 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 589824 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 581632 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 581632 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 573440 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 565248 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 565248 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 557056 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 557056 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 557056 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 548864 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 540672 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 540672 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 532480 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 532480 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 532480 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 524288 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 524288 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 524288 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 516096 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 516096 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 507904 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 507904 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 499712 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 499712 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 499712 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 491520 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 491520 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 491520 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634bfe2e000
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 491520 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 483328 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 483328 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 475136 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 475136 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 466944 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 466944 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 458752 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 450560 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 450560 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 450560 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 442368 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 442368 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 442368 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 434176 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 434176 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952357 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 425984 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 425984 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 425984 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 417792 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 67.314422607s of 67.320625305s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 417792 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951766 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 417792 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 409600 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 409600 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 401408 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 401408 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 401408 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 393216 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 385024 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 6424 writes, 26K keys, 6424 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6424 writes, 1124 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6424 writes, 26K keys, 6424 commit groups, 1.0 writes per commit group, ingest: 19.84 MB, 0.03 MB/s
Interval WAL: 6424 writes, 1124 syncs, 5.72 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 327680 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 319488 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 319488 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 319488 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 311296 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 311296 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 303104 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 303104 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 303104 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 294912 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 294912 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 294912 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 286720 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 286720 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 286720 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 278528 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 278528 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 278528 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 270336 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 270336 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 262144 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 262144 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 262144 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 245760 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 245760 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 237568 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 237568 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 229376 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 229376 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 229376 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 221184 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 221184 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 212992 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 212992 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 204800 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 204800 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 204800 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 196608 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 196608 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 188416 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 188416 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 188416 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 172032 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 172032 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 163840 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 155648 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 155648 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 147456 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 147456 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 147456 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 139264 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 139264 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 139264 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 131072 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 131072 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 122880 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 122880 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 122880 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 114688 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 114688 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 114688 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 106496 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 106496 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 106496 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 98304 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 98304 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 98304 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 90112 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 90112 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 81920 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 81920 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 73728 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 73728 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 73728 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 65536 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 65536 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 57344 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 86.433448792s of 86.441337585s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951262 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 40960 heap: 75538432 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 1032192 heap: 77635584 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 1769472 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1761280 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1761280 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bd23a800 session 0x5634bf1daf00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1753088 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1753088 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951175 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1753088 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1744896 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.445825577s of 24.319128036s, submitted: 234
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1728512 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 1720320 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 1720320 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1712128 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1712128 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1712128 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1703936 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1703936 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1687552 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1687552 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1687552 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1679360 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1679360 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1679360 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 1671168 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 1662976 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1654784 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1654784 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952687 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1646592 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.313213348s of 23.316928864s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1638400 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc000 session 0x5634bfcb3a40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1630208 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1622016 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1613824 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1605632 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952096 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 141.261001587s of 141.271926880s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1597440 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634bf4563c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953608 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.565235138s of 44.568458557s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1589248 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955120 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955120 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1581056 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1572864 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956632 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1564672 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.729745865s of 14.739072800s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 516096 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 516096 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 516096 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 516096 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 507904 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 507904 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79224832 unmapped: 507904 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634bd461c20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 499712 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956041 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.088268280s of 39.090908051s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957553 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957553 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.436481476s of 12.442902565s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958474 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 491520 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 475136 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634bef0c5a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bee53000 session 0x5634bf225e00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.697479248s of 56.708225250s, submitted: 3
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960907 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634bfe22d20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.040000916s of 48.052692413s, submitted: 3
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959725 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634bef0cb40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.977294922s of 56.983356476s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634c029ba40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.226074219s of 36.229869843s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963670 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc000 session 0x5634bd20be00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963670 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.313743591s of 10.320786476s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963079 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf727000 session 0x5634c0304f00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963079 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.823747635s of 10.826331139s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964591 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 6909 writes, 27K keys, 6909 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6909 writes, 1355 syncs, 5.10 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 485 writes, 766 keys, 485 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
Interval WAL: 485 writes, 231 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964591 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365579605s of 10.368579865s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964000 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc400 session 0x5634bd030780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.830619812s of 23.935253143s, submitted: 3
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966433 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000038s
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.325916290s of 47.341407776s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 49152 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,1])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1843200 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1671168 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1671168 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.360378265s of 33.329280853s, submitted: 257
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634c0305c20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.545049667s of 30.567733765s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968275 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969787 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.074189186s of 12.084068298s, submitted: 3
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1556480 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1556480 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1523712 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.175689697s of 27.178615570s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970708 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634c029b0e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.582208633s of 30.591884613s, submitted: 3
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971038 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.724964142s of 39.735435486s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bee53000 session 0x5634c055af00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.969955444s of 88.974121094s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634c06b1680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1343488 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.761932373s of 24.764957428s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 294912 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975134 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x179901/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 16998400 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc1e4000/0x0/0x4ffc00000, data 0x97ba51/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 153 ms_handle_reset con 0x5634be106400 session 0x5634bf7bed20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 16867328 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 16834560 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 154 ms_handle_reset con 0x5634be106800 session 0x5634c06e63c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077664 data_alloc: 218103808 data_used: 151552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.009191513s of 16.191659927s, submitted: 44
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf727000 session 0x5634c0304b40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf539c00 session 0x5634bf53bc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be106400 session 0x5634c06b2780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 16785408 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be106800 session 0x5634bfe22000
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be107400 session 0x5634c06b0960
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 5079040 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf727000 session 0x5634c06b01e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108216 data_alloc: 234881024 data_used: 11628544
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 5087232 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.016056061s of 35.019523621s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 5062656 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634bf034400 session 0x5634bfdc25a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634bf034400 session 0x5634bfd5ed20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be106400 session 0x5634bf53ba40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be106800 session 0x5634bf08a3c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be107400 session 0x5634c06cd860
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fb8f3000/0x0/0x4ffc00000, data 0x1264eb8/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151181 data_alloc: 234881024 data_used: 11628544
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f3000/0x0/0x4ffc00000, data 0x1264eb8/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06e6000
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95305728 unmapped: 5480448 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172443 data_alloc: 234881024 data_used: 14356480
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 2924544 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184603 data_alloc: 234881024 data_used: 16195584
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184755 data_alloc: 234881024 data_used: 16199680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.278177261s of 20.407505035s, submitted: 45
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 2703360 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263495 data_alloc: 234881024 data_used: 18071552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd3000/0x0/0x4ffc00000, data 0x19e5e8a/0x1aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 3145728 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 3145728 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256551 data_alloc: 234881024 data_used: 18071552
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3137536 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3137536 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257615 data_alloc: 234881024 data_used: 18145280
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.662779808s of 14.812747955s, submitted: 82
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257839 data_alloc: 234881024 data_used: 18145280
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fcf000/0x0/0x4ffc00000, data 0x19e9e8a/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 3088384 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b2f00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf224d20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfcb32c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 3104768 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c06b23c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688000 session 0x5634bf53ab40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257991 data_alloc: 234881024 data_used: 18673664
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 1818624 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634be148780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.993793488s of 10.001093864s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfceb40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcbf9c20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd20a780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688400 session 0x5634be1adc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfd5f0e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f977a000/0x0/0x4ffc00000, data 0x1e2ee8a/0x1ef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c41e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bd4cfe00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298504 data_alloc: 234881024 data_used: 18673664
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 8290304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf7be1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688800 session 0x5634bf4421e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 8970240 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107945984 unmapped: 8708096 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323629 data_alloc: 234881024 data_used: 21921792
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 6234112 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323629 data_alloc: 234881024 data_used: 21921792
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 6234112 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.279289246s of 18.403636932s, submitted: 31
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 4358144 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389915 data_alloc: 234881024 data_used: 22224896
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f55000/0x0/0x4ffc00000, data 0x264ae99/0x270f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1,1])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 4030464 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 3948544 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f4a000/0x0/0x4ffc00000, data 0x2654e99/0x2719000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 3940352 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390001 data_alloc: 234881024 data_used: 22290432
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f52000/0x0/0x4ffc00000, data 0x2654e99/0x2719000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634c02cbe00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 7290880 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf18c780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270311 data_alloc: 234881024 data_used: 18673664
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x19e9e8a/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.905948639s of 14.233164787s, submitted: 109
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06b3e00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be106400 session 0x5634bcfe1c20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 7520256 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfe2f680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 10739712 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 10739712 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf443860
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf443a40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634bf442d20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd462f00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.204719543s of 34.300907135s, submitted: 31
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd462b40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107069440 unmapped: 23027712 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bcfce5a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfcf4a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcfce780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06d7e00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634bf7c0d20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225465 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bf7c1680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c0000
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7c03c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 23289856 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 23683072 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 18817024 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289986 data_alloc: 234881024 data_used: 21778432
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289986 data_alloc: 234881024 data_used: 21778432
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.452789307s of 16.602790833s, submitted: 56
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 13295616 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373192 data_alloc: 234881024 data_used: 22908928
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373648 data_alloc: 234881024 data_used: 22921216
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 10559488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367704 data_alloc: 234881024 data_used: 22933504
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.744583130s of 15.027527809s, submitted: 96
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367784 data_alloc: 234881024 data_used: 22933504
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x2128efc/0x21ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x2128efc/0x21ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367784 data_alloc: 234881024 data_used: 22933504
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947b000/0x0/0x4ffc00000, data 0x212befc/0x21f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369392 data_alloc: 234881024 data_used: 23019520
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947b000/0x0/0x4ffc00000, data 0x212befc/0x21f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.042746544s of 13.056042671s, submitted: 4
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 12345344 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 12345344 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf1da1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bd4cef00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf4c7680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.141063690s of 20.330921173s, submitted: 61
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c06b2b40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b23c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06b2d20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c06b3680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bcfcfe00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157691 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bf7be5a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 21372928 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8425 writes, 31K keys, 8425 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8425 writes, 2022 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1516 writes, 4428 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s#012Interval WAL: 1516 writes, 667 syncs, 2.27 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161623 data_alloc: 234881024 data_used: 12693504
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161623 data_alloc: 234881024 data_used: 12693504
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.318338394s of 17.361791611s, submitted: 13
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109363200 unmapped: 20733952 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213389 data_alloc: 234881024 data_used: 12693504
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 21151744 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 21094400 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 20758528 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 20750336 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 20750336 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x1557e8a/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220401 data_alloc: 234881024 data_used: 12890112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 20766720 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 20766720 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 20758528 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218617 data_alloc: 234881024 data_used: 12890112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.672043800s of 14.786133766s, submitted: 55
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218857 data_alloc: 234881024 data_used: 12890112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08b860
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf08a780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08ab40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf08bc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08ba40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634bf53b4a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bf53ab40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf53a1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf53ad20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253835 data_alloc: 234881024 data_used: 12890112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53af00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253835 data_alloc: 234881024 data_used: 12890112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634bf53bc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3c00 session 0x5634bf53ba40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.216509819s of 12.251233101s, submitted: 6
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634c06b0f00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109617152 unmapped: 20480000 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109625344 unmapped: 20471808 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 18989056 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291410 data_alloc: 234881024 data_used: 17616896
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba3000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291578 data_alloc: 234881024 data_used: 17616896
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.640722275s of 12.708586693s, submitted: 10
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba3000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 18825216 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298216 data_alloc: 234881024 data_used: 17735680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9b5e000/0x0/0x4ffc00000, data 0x1a48eaa/0x1b0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300012 data_alloc: 234881024 data_used: 17735680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9b5e000/0x0/0x4ffc00000, data 0x1a48eaa/0x1b0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bfe221e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bdc7c000
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 20668416 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634c029ad20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224813 data_alloc: 234881024 data_used: 12890112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.472726822s of 12.585700035s, submitted: 33
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634c06b0b40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689c00 session 0x5634bf7c1680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bd20b2c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.771172523s of 14.867496490s, submitted: 30
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 22495232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 21422080 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 21192704 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 21028864 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08b2c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c05a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd463e00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c06b3680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bef0d680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 24567808 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212735 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 24535040 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfe34a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109780992 unmapped: 24518656 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212735 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 24141824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 21692416 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 21659648 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 21659648 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257575 data_alloc: 234881024 data_used: 18759680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257575 data_alloc: 234881024 data_used: 18759680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 21618688 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.294692993s of 25.236562729s, submitted: 260
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 14884864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 12951552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 12951552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 12918784 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329075 data_alloc: 234881024 data_used: 20541440
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 12918784 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329075 data_alloc: 234881024 data_used: 20541440
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd23a800 session 0x5634bcfe05a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329379 data_alloc: 234881024 data_used: 20549632
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 12828672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 12828672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329379 data_alloc: 234881024 data_used: 20549632
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330139 data_alloc: 234881024 data_used: 20570112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.572525024s of 26.743309021s, submitted: 82
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e6960
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3000 session 0x5634c06b05a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08a1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfcf860
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfe23c20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350053 data_alloc: 234881024 data_used: 20570112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972c000/0x0/0x4ffc00000, data 0x1e7ce8a/0x1f40000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634bfd5e780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350693 data_alloc: 234881024 data_used: 20570112
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 14876672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 12206080 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376837 data_alloc: 234881024 data_used: 24346624
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376837 data_alloc: 234881024 data_used: 24346624
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.929925919s of 18.974597931s, submitted: 10
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 8585216 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9393000/0x0/0x4ffc00000, data 0x220de8a/0x22d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412031 data_alloc: 234881024 data_used: 24842240
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412047 data_alloc: 234881024 data_used: 24842240
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412047 data_alloc: 234881024 data_used: 24842240
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.664098740s of 16.812852859s, submitted: 44
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125419520 unmapped: 8880128 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125419520 unmapped: 8880128 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125435904 unmapped: 8863744 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.351060867s of 15.361025810s, submitted: 14
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 8790016 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411143 data_alloc: 234881024 data_used: 24825856
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 8790016 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: mgrc ms_handle_reset ms_handle_reset con 0x5634bddcfc00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3769522832
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/3769522832]
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: mgrc handle_mgr_configure stats_period=5
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd1e9000 session 0x5634bf4570e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf538800 session 0x5634bfa2bc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 8675328 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634c06e6d20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf225680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 8675328 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.096628189s of 21.114942551s, submitted: 5
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327619 data_alloc: 234881024 data_used: 20619264
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82ed20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9acb000/0x0/0x4ffc00000, data 0x1adde8a/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327619 data_alloc: 234881024 data_used: 20619264
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bcfe32c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689c00 session 0x5634be1ad680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd20b680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9acb000/0x0/0x4ffc00000, data 0x1adde8a/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.783638000s of 33.895526886s, submitted: 33
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b2780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bf7c4b40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf7c41e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf7c52c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c02e0b40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220693 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e01e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7be960
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220693 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bf7bfc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf7bf4a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260866 data_alloc: 234881024 data_used: 18022400
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf82eb40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.973909378s of 12.041707993s, submitted: 15
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82fc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7c52c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53a000
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bcfcf860
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b05a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.199029922s of 17.423206329s, submitted: 27
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfcb32c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bcfe14a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e72c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e6f00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029a5a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0x13c9e9a/0x148e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226040 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf18de00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bdc7c780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 23732224 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf444000
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e1860
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 23724032 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0x13c9e9a/0x148e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 23724032 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227854 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 23715840 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1dd000/0x0/0x4ffc00000, data 0x13c9eaa/0x148f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf82f0e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53bc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.103581429s of 11.147413254s, submitted: 8
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bf7c52c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e400 session 0x5634bf7bf4a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e400 session 0x5634bfa2a1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bfa2bc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bfa2a3c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.712280273s of 27.752235413s, submitted: 13
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 29245440 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bfa2a000
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf2252c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf224960
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf08a1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bf7bef00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228500 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228500 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.442607880s of 10.648617744s, submitted: 19
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bdc7c780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113442816 unmapped: 29261824 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 29204480 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 28254208 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259524 data_alloc: 234881024 data_used: 16396288
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259524 data_alloc: 234881024 data_used: 16396288
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 27885568 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 27885568 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.894562721s of 12.903012276s, submitted: 2
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261912 data_alloc: 234881024 data_used: 16449536
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 27590656 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 27582464 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274846 data_alloc: 234881024 data_used: 16560128
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274846 data_alloc: 234881024 data_used: 16560128
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.734338760s of 14.816822052s, submitted: 27
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634c02e01e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bcfe1a40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270054 data_alloc: 234881024 data_used: 16560128
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf08be00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.250144958s of 21.353006363s, submitted: 31
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfe2b40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf1da5a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bfcb30e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82eb40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c029af00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fee8a/0x13c2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237405 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfa2ab40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fee8a/0x13c2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 27533312 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 27664384 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269042 data_alloc: 234881024 data_used: 16138240
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269042 data_alloc: 234881024 data_used: 16138240
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634bf445a40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea400 session 0x5634bf7bfe00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd4614a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf029e00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.969479561s of 16.080394745s, submitted: 32
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfe225a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634c06cd680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea800 session 0x5634bf08a960
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e14a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06e65a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303056 data_alloc: 234881024 data_used: 16138240
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 21438464 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af9000/0x0/0x4ffc00000, data 0x1aa6e9a/0x1b6b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfcb25a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336043 data_alloc: 234881024 data_used: 17305600
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120160256 unmapped: 22544384 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 22536192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 21209088 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.558697701s of 10.774977684s, submitted: 67
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365375 data_alloc: 234881024 data_used: 21561344
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365375 data_alloc: 234881024 data_used: 21561344
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.470090866s of 10.473713875s, submitted: 1
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 19791872 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 19660800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1374765 data_alloc: 234881024 data_used: 21581824
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382269 data_alloc: 234881024 data_used: 21577728
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124182528 unmapped: 18522112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634bcbf9e00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.591684341s of 11.705703735s, submitted: 37
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634c06b03c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377601 data_alloc: 234881024 data_used: 21577728
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 20422656 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bf7be1e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9f1c000/0x0/0x4ffc00000, data 0x168ce8a/0x1750000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bf4450e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634c06e72c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119160832 unmapped: 23543808 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf2245a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcfce780
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd463e00
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bd4632c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bf443c20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.324642181s of 25.657859802s, submitted: 76
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,2])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 17776640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634be1ada40
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf7c10e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf53a3c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf53b4a0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bd031680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285564 data_alloc: 234881024 data_used: 12169216
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bd20b680
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bd20bc20
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bd20b860
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029b2c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 23994368 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287325 data_alloc: 234881024 data_used: 12169216
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353141 data_alloc: 234881024 data_used: 21815296
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353141 data_alloc: 234881024 data_used: 21815296
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.633270264s of 20.794521332s, submitted: 35
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 13688832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128851968 unmapped: 13852672 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f68000/0x0/0x4ffc00000, data 0x2227e9a/0x22ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128991232 unmapped: 13713408 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128991232 unmapped: 13713408 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8ebf000/0x0/0x4ffc00000, data 0x22d8e9a/0x239d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457277 data_alloc: 234881024 data_used: 22888448
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 13680640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 13680640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 13672448 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x22fce9a/0x23c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454133 data_alloc: 234881024 data_used: 22888448
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 13590528 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 13590528 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.613185883s of 11.909707069s, submitted: 129
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x22fce9a/0x23c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e94000/0x0/0x4ffc00000, data 0x2303e9a/0x23c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453885 data_alloc: 234881024 data_used: 22888448
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e94000/0x0/0x4ffc00000, data 0x2303e9a/0x23c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf82f0e0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bfcb3860
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bd20b2c0
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 20553728 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}'
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'config show' '{prefix=config show}'
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 20537344 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 20865024 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:05:56 np0005533252 ceph-osd[77497]: do_command 'log dump' '{prefix=log dump}'
Nov 24 05:05:56 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 05:05:56 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2847382315' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 05:05:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:56.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 05:05:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2915704996' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 05:05:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:05:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:05:57 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 05:05:57 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2950687480' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 05:05:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 24 05:05:58 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2888619743' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 05:05:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:58.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1381658976' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2387178002' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 05:05:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:05:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:05:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2839996989' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 24 05:05:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/632773887' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 05:06:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 24 05:06:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3737416269' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 05:06:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:06:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:06:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 24 05:06:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2953229075' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 05:06:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 24 05:06:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1925814334' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 05:06:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:00.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1699553728' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 05:06:01 np0005533252 nova_compute[230010]: 2025-11-24 10:06:01.236 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:06:01 np0005533252 nova_compute[230010]: 2025-11-24 10:06:01.301 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:06:01 np0005533252 podman[247011]: 2025-11-24 10:06:01.35169983 +0000 UTC m=+0.082361245 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2545531661' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 05:06:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:01.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:01 np0005533252 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3919931828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4204316792' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906280520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 05:06:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906280520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 05:06:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 05:06:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3987749054' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 05:06:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 24 05:06:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3328823182' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 05:06:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 24 05:06:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4061091675' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 05:06:02 np0005533252 systemd[1]: Starting Hostname Service...
Nov 24 05:06:02 np0005533252 systemd[1]: Started Hostname Service.
Nov 24 05:06:02 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 24 05:06:02 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/811620630' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 05:06:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:02.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:03.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 24 05:06:04 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4110595336' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 05:06:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 24 05:06:04 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2200727252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 05:06:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:04.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:05 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 05:06:05 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/812305037' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 05:06:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:05.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:05 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 24 05:06:05 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1332274527' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 05:06:06 np0005533252 nova_compute[230010]: 2025-11-24 10:06:06.238 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:06:06 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 05:06:06 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 05:06:06 np0005533252 nova_compute[230010]: 2025-11-24 10:06:06.304 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:06:06 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 24 05:06:06 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2999518195' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 05:06:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:06.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:07.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:07 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 24 05:06:07 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1880514736' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 05:06:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 24 05:06:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2982781899' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 05:06:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 05:06:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 05:06:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 24 05:06:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1811800393' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 05:06:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:08.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 24 05:06:08 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2080032816' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 05:06:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 05:06:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 05:06:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:09.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 24 05:06:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2135132105' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 05:06:10 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 24 05:06:10 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3311372601' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 05:06:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:11 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 24 05:06:11 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1209386023' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 05:06:11 np0005533252 nova_compute[230010]: 2025-11-24 10:06:11.240 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:06:11 np0005533252 nova_compute[230010]: 2025-11-24 10:06:11.305 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:06:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:11.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 24 05:06:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2394133039' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 05:06:12 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 24 05:06:12 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3082189847' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 05:06:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:13 np0005533252 ovs-appctl[248965]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 05:06:13 np0005533252 ovs-appctl[248973]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 05:06:13 np0005533252 ovs-appctl[248979]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 05:06:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3408717226' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3492086322' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:06:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:14.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:15 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:06:15 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:06:15 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:06:15 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:06:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:06:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:06:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:15.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 05:06:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2984097938' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 05:06:16 np0005533252 nova_compute[230010]: 2025-11-24 10:06:16.243 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:16 np0005533252 nova_compute[230010]: 2025-11-24 10:06:16.307 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:16 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 24 05:06:16 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/827779880' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 05:06:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:16.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:17.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:17 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 05:06:17 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1759652024' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 05:06:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 24 05:06:18 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3181046644' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 05:06:18 np0005533252 podman[250600]: 2025-11-24 10:06:18.348734495 +0000 UTC m=+0.078160302 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 05:06:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 24 05:06:18 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1404093567' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 05:06:18 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 24 05:06:18 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3837430605' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 05:06:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:06:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:06:19 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:06:19 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:06:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:19.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 24 05:06:19 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1692125416' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 05:06:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:06:20.069 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:06:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:06:20.070 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:06:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:06:20.070 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:06:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 24 05:06:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2133798779' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 05:06:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:21 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 24 05:06:21 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1885530590' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 05:06:21 np0005533252 nova_compute[230010]: 2025-11-24 10:06:21.249 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:21 np0005533252 nova_compute[230010]: 2025-11-24 10:06:21.309 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:21.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:21 np0005533252 nova_compute[230010]: 2025-11-24 10:06:21.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.161431) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782161545, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2514, "num_deletes": 507, "total_data_size": 5264967, "memory_usage": 5337648, "flush_reason": "Manual Compaction"}
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782185332, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 3407008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33657, "largest_seqno": 36166, "table_properties": {"data_size": 3396428, "index_size": 5986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 28259, "raw_average_key_size": 20, "raw_value_size": 3371900, "raw_average_value_size": 2413, "num_data_blocks": 257, "num_entries": 1397, "num_filter_entries": 1397, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978623, "oldest_key_time": 1763978623, "file_creation_time": 1763978782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 23988 microseconds, and 10501 cpu microseconds.
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.185433) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 3407008 bytes OK
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.185463) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.186776) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.186789) EVENT_LOG_v1 {"time_micros": 1763978782186785, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.186807) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 5251950, prev total WAL file size 5251950, number of live WAL files 2.
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.188050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3327KB)], [63(13MB)]
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782188122, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 17516738, "oldest_snapshot_seqno": -1}
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6627 keys, 16035146 bytes, temperature: kUnknown
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782271618, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 16035146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15989410, "index_size": 28088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 172891, "raw_average_key_size": 26, "raw_value_size": 15868704, "raw_average_value_size": 2394, "num_data_blocks": 1114, "num_entries": 6627, "num_filter_entries": 6627, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.271838) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16035146 bytes
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.273554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.6 rd, 191.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 13.5 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(9.8) write-amplify(4.7) OK, records in: 7656, records dropped: 1029 output_compression: NoCompression
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.273591) EVENT_LOG_v1 {"time_micros": 1763978782273577, "job": 38, "event": "compaction_finished", "compaction_time_micros": 83561, "compaction_time_cpu_micros": 31701, "output_level": 6, "num_output_files": 1, "total_output_size": 16035146, "num_input_records": 7656, "num_output_records": 6627, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782274288, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782277064, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.187937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:06:22.277151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2599335431' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 05:06:22 np0005533252 nova_compute[230010]: 2025-11-24 10:06:22.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 24 05:06:22 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2897248161' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 05:06:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:23 np0005533252 nova_compute[230010]: 2025-11-24 10:06:23.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:23 np0005533252 nova_compute[230010]: 2025-11-24 10:06:23.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:06:24 np0005533252 podman[251107]: 2025-11-24 10:06:24.504368804 +0000 UTC m=+0.099568200 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 05:06:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 24 05:06:24 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2550223427' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 05:06:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:25 np0005533252 nova_compute[230010]: 2025-11-24 10:06:25.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:26 np0005533252 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 05:06:26 np0005533252 nova_compute[230010]: 2025-11-24 10:06:26.252 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:26 np0005533252 nova_compute[230010]: 2025-11-24 10:06:26.311 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 24 05:06:26 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1330152833' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 05:06:26 np0005533252 systemd[1]: Starting Time & Date Service...
Nov 24 05:06:26 np0005533252 systemd[1]: Started Time & Date Service.
Nov 24 05:06:26 np0005533252 nova_compute[230010]: 2025-11-24 10:06:26.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:06:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:26.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:06:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:27.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:27 np0005533252 nova_compute[230010]: 2025-11-24 10:06:27.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.002 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.002 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.003 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.003 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.003 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:06:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:06:28 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2719288102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.496 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.654 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.655 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4669MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.655 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.656 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.790 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:06:28 np0005533252 nova_compute[230010]: 2025-11-24 10:06:28.812 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:06:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:06:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/416144228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:06:29 np0005533252 nova_compute[230010]: 2025-11-24 10:06:29.265 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:06:29 np0005533252 nova_compute[230010]: 2025-11-24 10:06:29.273 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:06:29 np0005533252 nova_compute[230010]: 2025-11-24 10:06:29.293 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:06:29 np0005533252 nova_compute[230010]: 2025-11-24 10:06:29.295 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:06:29 np0005533252 nova_compute[230010]: 2025-11-24 10:06:29.295 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:06:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:06:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:06:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:30.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:31 np0005533252 nova_compute[230010]: 2025-11-24 10:06:31.254 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:31 np0005533252 nova_compute[230010]: 2025-11-24 10:06:31.296 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:31 np0005533252 nova_compute[230010]: 2025-11-24 10:06:31.296 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:06:31 np0005533252 nova_compute[230010]: 2025-11-24 10:06:31.296 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:06:31 np0005533252 nova_compute[230010]: 2025-11-24 10:06:31.314 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:31 np0005533252 nova_compute[230010]: 2025-11-24 10:06:31.340 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:06:31 np0005533252 nova_compute[230010]: 2025-11-24 10:06:31.340 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:31 np0005533252 nova_compute[230010]: 2025-11-24 10:06:31.340 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:06:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:31.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:32 np0005533252 podman[251734]: 2025-11-24 10:06:32.323336739 +0000 UTC m=+0.062878924 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 05:06:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:33.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 05:06:33 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2914 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2050 writes, 6533 keys, 2050 commit groups, 1.0 writes per commit group, ingest: 7.27 MB, 0.01 MB/s#012Interval WAL: 2050 writes, 892 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 05:06:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:35.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:35.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:36 np0005533252 nova_compute[230010]: 2025-11-24 10:06:36.258 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:36 np0005533252 nova_compute[230010]: 2025-11-24 10:06:36.318 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:37.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:39.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:41.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:41 np0005533252 nova_compute[230010]: 2025-11-24 10:06:41.263 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:41 np0005533252 nova_compute[230010]: 2025-11-24 10:06:41.320 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:41.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:43.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:43.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:06:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:45.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:06:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:06:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:06:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:45.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:46 np0005533252 nova_compute[230010]: 2025-11-24 10:06:46.266 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:46 np0005533252 nova_compute[230010]: 2025-11-24 10:06:46.322 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:47.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:47.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:49.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:49 np0005533252 podman[251761]: 2025-11-24 10:06:49.347634914 +0000 UTC m=+0.087530522 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 05:06:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:51.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:51 np0005533252 nova_compute[230010]: 2025-11-24 10:06:51.269 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:51 np0005533252 nova_compute[230010]: 2025-11-24 10:06:51.325 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:51.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:53.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:55.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:55 np0005533252 podman[251810]: 2025-11-24 10:06:55.353650529 +0000 UTC m=+0.095537371 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 05:06:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:55.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:56 np0005533252 nova_compute[230010]: 2025-11-24 10:06:56.272 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:56 np0005533252 nova_compute[230010]: 2025-11-24 10:06:56.330 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:06:56 np0005533252 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 05:06:56 np0005533252 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 05:06:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:57.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.006000144s ======
Nov 24 05:06:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:57.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000144s
Nov 24 05:06:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:59.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:06:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:06:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:06:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:06:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:59.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:07:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:07:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:01.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:01 np0005533252 nova_compute[230010]: 2025-11-24 10:07:01.277 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:01 np0005533252 nova_compute[230010]: 2025-11-24 10:07:01.334 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:01.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:02 np0005533252 podman[251846]: 2025-11-24 10:07:02.516470257 +0000 UTC m=+0.059804288 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.605474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822605516, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 687, "num_deletes": 251, "total_data_size": 1205587, "memory_usage": 1233264, "flush_reason": "Manual Compaction"}
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822612579, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 793238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36171, "largest_seqno": 36853, "table_properties": {"data_size": 789716, "index_size": 1366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8663, "raw_average_key_size": 19, "raw_value_size": 782517, "raw_average_value_size": 1803, "num_data_blocks": 60, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978782, "oldest_key_time": 1763978782, "file_creation_time": 1763978822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7136 microseconds, and 3278 cpu microseconds.
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.612613) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 793238 bytes OK
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.612635) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.615938) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.615953) EVENT_LOG_v1 {"time_micros": 1763978822615948, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.615968) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1201703, prev total WAL file size 1201703, number of live WAL files 2.
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.616733) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(774KB)], [66(15MB)]
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822616773, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 16828384, "oldest_snapshot_seqno": -1}
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6550 keys, 14696989 bytes, temperature: kUnknown
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822706684, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 14696989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14652718, "index_size": 26815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172276, "raw_average_key_size": 26, "raw_value_size": 14534312, "raw_average_value_size": 2218, "num_data_blocks": 1057, "num_entries": 6550, "num_filter_entries": 6550, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.706958) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14696989 bytes
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.710215) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.0 rd, 163.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(39.7) write-amplify(18.5) OK, records in: 7061, records dropped: 511 output_compression: NoCompression
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.710235) EVENT_LOG_v1 {"time_micros": 1763978822710226, "job": 40, "event": "compaction_finished", "compaction_time_micros": 90007, "compaction_time_cpu_micros": 44669, "output_level": 6, "num_output_files": 1, "total_output_size": 14696989, "num_input_records": 7061, "num_output_records": 6550, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822710688, "job": 40, "event": "table_file_deletion", "file_number": 68}
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822714085, "job": 40, "event": "table_file_deletion", "file_number": 66}
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.616637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 05:07:02 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6892 writes, 36K keys, 6892 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 6892 writes, 6892 syncs, 1.00 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1550 writes, 8363 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 18.03 MB, 0.03 MB/s#012Interval WAL: 1550 writes, 1550 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    143.1      0.39              0.14        20    0.019       0      0       0.0       0.0#012  L6      1/0   14.02 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    165.5    142.0      1.76              0.60        19    0.093    109K    10K       0.0       0.0#012 Sum      1/0   14.02 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    135.7    142.2      2.14              0.73        39    0.055    109K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2    163.4    164.0      0.50              0.20        10    0.050     34K   3588       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    165.5    142.0      1.76              0.60        19    0.093    109K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    143.8      0.38              0.14        19    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.054, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.30 GB write, 0.13 MB/s write, 0.28 GB read, 0.12 MB/s read, 2.1 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a5fe7f5350#2 capacity: 304.00 MB usage: 26.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000202 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1606,25.86 MB,8.50756%) FilterBlock(39,320.42 KB,0.102932%) IndexBlock(39,553.73 KB,0.17788%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 05:07:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:03.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:03.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:05.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:07:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:05.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:07:06 np0005533252 nova_compute[230010]: 2025-11-24 10:07:06.280 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:06 np0005533252 nova_compute[230010]: 2025-11-24 10:07:06.335 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:07.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:07.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:09.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:09.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:11.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:11 np0005533252 nova_compute[230010]: 2025-11-24 10:07:11.284 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:11 np0005533252 nova_compute[230010]: 2025-11-24 10:07:11.336 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:12.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:13.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:14.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:15.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:07:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:07:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:16.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:16 np0005533252 nova_compute[230010]: 2025-11-24 10:07:16.290 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:16 np0005533252 nova_compute[230010]: 2025-11-24 10:07:16.341 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:17.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:18.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:19.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:19 np0005533252 systemd[1]: session-55.scope: Deactivated successfully.
Nov 24 05:07:19 np0005533252 systemd[1]: session-55.scope: Consumed 2min 57.987s CPU time, 780.2M memory peak, read 309.6M from disk, written 232.6M to disk.
Nov 24 05:07:19 np0005533252 systemd-logind[823]: Session 55 logged out. Waiting for processes to exit.
Nov 24 05:07:19 np0005533252 systemd-logind[823]: Removed session 55.
Nov 24 05:07:19 np0005533252 systemd-logind[823]: New session 56 of user zuul.
Nov 24 05:07:19 np0005533252 systemd[1]: Started Session 56 of User zuul.
Nov 24 05:07:19 np0005533252 podman[251902]: 2025-11-24 10:07:19.487551448 +0000 UTC m=+0.086092118 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 05:07:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:19 np0005533252 systemd[1]: session-56.scope: Deactivated successfully.
Nov 24 05:07:19 np0005533252 systemd-logind[823]: Session 56 logged out. Waiting for processes to exit.
Nov 24 05:07:19 np0005533252 systemd-logind[823]: Removed session 56.
Nov 24 05:07:19 np0005533252 systemd-logind[823]: New session 57 of user zuul.
Nov 24 05:07:19 np0005533252 systemd[1]: Started Session 57 of User zuul.
Nov 24 05:07:19 np0005533252 systemd[1]: session-57.scope: Deactivated successfully.
Nov 24 05:07:19 np0005533252 systemd-logind[823]: Session 57 logged out. Waiting for processes to exit.
Nov 24 05:07:19 np0005533252 systemd-logind[823]: Removed session 57.
Nov 24 05:07:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:07:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:07:20.071 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:07:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:07:20.071 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:07:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:07:20.071 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:07:20 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:07:21 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:07:21 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:07:21 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:07:21 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:07:21 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:07:21 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:07:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:21.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:21 np0005533252 nova_compute[230010]: 2025-11-24 10:07:21.295 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:21 np0005533252 nova_compute[230010]: 2025-11-24 10:07:21.343 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:21 np0005533252 nova_compute[230010]: 2025-11-24 10:07:21.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:07:21 np0005533252 nova_compute[230010]: 2025-11-24 10:07:21.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:07:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:23.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:23 np0005533252 nova_compute[230010]: 2025-11-24 10:07:23.778 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:07:23 np0005533252 nova_compute[230010]: 2025-11-24 10:07:23.778 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:07:23 np0005533252 nova_compute[230010]: 2025-11-24 10:07:23.779 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:07:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:24.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:25.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:07:25 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:07:25 np0005533252 nova_compute[230010]: 2025-11-24 10:07:25.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:07:25 np0005533252 nova_compute[230010]: 2025-11-24 10:07:25.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 24 05:07:25 np0005533252 nova_compute[230010]: 2025-11-24 10:07:25.820 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 24 05:07:25 np0005533252 podman[252086]: 2025-11-24 10:07:25.839935528 +0000 UTC m=+0.161391789 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 05:07:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:26.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:26 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:07:26 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:07:26 np0005533252 nova_compute[230010]: 2025-11-24 10:07:26.298 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:26 np0005533252 nova_compute[230010]: 2025-11-24 10:07:26.343 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:07:26 np0005533252 nova_compute[230010]: 2025-11-24 10:07:26.813 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:07:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:27.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:27 np0005533252 nova_compute[230010]: 2025-11-24 10:07:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:07:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:28.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:28 np0005533252 nova_compute[230010]: 2025-11-24 10:07:28.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:07:28 np0005533252 nova_compute[230010]: 2025-11-24 10:07:28.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:07:28 np0005533252 nova_compute[230010]: 2025-11-24 10:07:28.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:07:28 np0005533252 nova_compute[230010]: 2025-11-24 10:07:28.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:07:28 np0005533252 nova_compute[230010]: 2025-11-24 10:07:28.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:07:28 np0005533252 nova_compute[230010]: 2025-11-24 10:07:28.790 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:07:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:29.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:07:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3520934719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:07:29 np0005533252 nova_compute[230010]: 2025-11-24 10:07:29.259 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:07:29 np0005533252 nova_compute[230010]: 2025-11-24 10:07:29.471 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:07:29 np0005533252 nova_compute[230010]: 2025-11-24 10:07:29.472 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4886MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:07:29 np0005533252 nova_compute[230010]: 2025-11-24 10:07:29.472 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:07:29 np0005533252 nova_compute[230010]: 2025-11-24 10:07:29.472 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:07:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:29 np0005533252 nova_compute[230010]: 2025-11-24 10:07:29.650 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:07:29 np0005533252 nova_compute[230010]: 2025-11-24 10:07:29.651 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:07:29 np0005533252 nova_compute[230010]: 2025-11-24 10:07:29.682 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:07:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:30.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:07:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4056666050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:07:30 np0005533252 nova_compute[230010]: 2025-11-24 10:07:30.143 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 05:07:30 np0005533252 nova_compute[230010]: 2025-11-24 10:07:30.149 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 05:07:30 np0005533252 nova_compute[230010]: 2025-11-24 10:07:30.168 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 05:07:30 np0005533252 nova_compute[230010]: 2025-11-24 10:07:30.170 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 05:07:30 np0005533252 nova_compute[230010]: 2025-11-24 10:07:30.170 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 05:07:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:07:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:07:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:31.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:31 np0005533252 nova_compute[230010]: 2025-11-24 10:07:31.170 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:07:31 np0005533252 nova_compute[230010]: 2025-11-24 10:07:31.192 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:07:31 np0005533252 nova_compute[230010]: 2025-11-24 10:07:31.193 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 05:07:31 np0005533252 nova_compute[230010]: 2025-11-24 10:07:31.194 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 05:07:31 np0005533252 nova_compute[230010]: 2025-11-24 10:07:31.213 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 05:07:31 np0005533252 nova_compute[230010]: 2025-11-24 10:07:31.214 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:07:31 np0005533252 nova_compute[230010]: 2025-11-24 10:07:31.301 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:31 np0005533252 nova_compute[230010]: 2025-11-24 10:07:31.347 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:32.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:32 np0005533252 nova_compute[230010]: 2025-11-24 10:07:32.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:07:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:33.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:33 np0005533252 podman[252187]: 2025-11-24 10:07:33.315455529 +0000 UTC m=+0.050069198 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 05:07:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:34.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:35.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:36 np0005533252 nova_compute[230010]: 2025-11-24 10:07:36.304 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:36 np0005533252 nova_compute[230010]: 2025-11-24 10:07:36.348 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000048s ======
Nov 24 05:07:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:37.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 24 05:07:37 np0005533252 nova_compute[230010]: 2025-11-24 10:07:37.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 05:07:37 np0005533252 nova_compute[230010]: 2025-11-24 10:07:37.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 05:07:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:39.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:40.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:41.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:41 np0005533252 nova_compute[230010]: 2025-11-24 10:07:41.307 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:41 np0005533252 nova_compute[230010]: 2025-11-24 10:07:41.350 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:42.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:43.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:44.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:45.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:07:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:07:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:46.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:46 np0005533252 nova_compute[230010]: 2025-11-24 10:07:46.311 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:46 np0005533252 nova_compute[230010]: 2025-11-24 10:07:46.352 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:48.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:49.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:49 np0005533252 podman[252240]: 2025-11-24 10:07:49.925557546 +0000 UTC m=+0.066746750 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 05:07:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:50.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:51.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:51 np0005533252 nova_compute[230010]: 2025-11-24 10:07:51.314 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:51 np0005533252 nova_compute[230010]: 2025-11-24 10:07:51.355 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 05:07:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:52.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 05:07:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:53.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:54.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:07:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:55.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:56.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:56 np0005533252 nova_compute[230010]: 2025-11-24 10:07:56.320 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:56 np0005533252 nova_compute[230010]: 2025-11-24 10:07:56.357 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:07:56 np0005533252 podman[252265]: 2025-11-24 10:07:56.405626682 +0000 UTC m=+0.129551320 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 05:07:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:07:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:57.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:07:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:07:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:07:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:07:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:07:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:59.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:07:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:00.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:08:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:08:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:01 np0005533252 nova_compute[230010]: 2025-11-24 10:08:01.326 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:08:01 np0005533252 nova_compute[230010]: 2025-11-24 10:08:01.358 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 05:08:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:02.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:03.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:04.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:04 np0005533252 podman[252296]: 2025-11-24 10:08:04.316229622 +0000 UTC m=+0.059673895 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 05:08:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:06.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:06 np0005533252 nova_compute[230010]: 2025-11-24 10:08:06.329 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:06 np0005533252 nova_compute[230010]: 2025-11-24 10:08:06.360 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:07.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:08.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:09.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:10.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:11.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:11 np0005533252 nova_compute[230010]: 2025-11-24 10:08:11.334 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:11 np0005533252 nova_compute[230010]: 2025-11-24 10:08:11.363 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:12.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:13.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:15.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:08:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:08:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:16.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:16 np0005533252 nova_compute[230010]: 2025-11-24 10:08:16.338 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:16 np0005533252 nova_compute[230010]: 2025-11-24 10:08:16.365 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:17.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:18.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:19.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:08:20.074 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:08:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:08:20.074 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:08:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:08:20.074 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:08:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:20.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:20 np0005533252 podman[252351]: 2025-11-24 10:08:20.340635648 +0000 UTC m=+0.074915742 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 05:08:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:21 np0005533252 nova_compute[230010]: 2025-11-24 10:08:21.355 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:21 np0005533252 nova_compute[230010]: 2025-11-24 10:08:21.368 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:21 np0005533252 nova_compute[230010]: 2025-11-24 10:08:21.774 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:22.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:23 np0005533252 nova_compute[230010]: 2025-11-24 10:08:23.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:24.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:24 np0005533252 nova_compute[230010]: 2025-11-24 10:08:24.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:24 np0005533252 nova_compute[230010]: 2025-11-24 10:08:24.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:08:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:25.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:26.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 05:08:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 05:08:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 05:08:26 np0005533252 nova_compute[230010]: 2025-11-24 10:08:26.358 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:26 np0005533252 nova_compute[230010]: 2025-11-24 10:08:26.369 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:26 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 05:08:26 np0005533252 podman[252469]: 2025-11-24 10:08:26.654363452 +0000 UTC m=+0.104712218 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 05:08:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:27.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.271160) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907271211, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1080, "num_deletes": 251, "total_data_size": 2584771, "memory_usage": 2606440, "flush_reason": "Manual Compaction"}
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907297700, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1115589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36858, "largest_seqno": 37933, "table_properties": {"data_size": 1111576, "index_size": 1665, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10536, "raw_average_key_size": 20, "raw_value_size": 1103012, "raw_average_value_size": 2197, "num_data_blocks": 71, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978823, "oldest_key_time": 1763978823, "file_creation_time": 1763978907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 26581 microseconds, and 4161 cpu microseconds.
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.297746) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1115589 bytes OK
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.297770) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.320276) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.320306) EVENT_LOG_v1 {"time_micros": 1763978907320298, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.320338) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2579468, prev total WAL file size 2579468, number of live WAL files 2.
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.321610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303037' seq:72057594037927935, type:22 .. '6D6772737461740031323539' seq:0, type:0; will stop at (end)
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1089KB)], [69(14MB)]
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907321693, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 15812578, "oldest_snapshot_seqno": -1}
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6567 keys, 12256014 bytes, temperature: kUnknown
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907382081, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12256014, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12215273, "index_size": 23221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 172778, "raw_average_key_size": 26, "raw_value_size": 12100073, "raw_average_value_size": 1842, "num_data_blocks": 909, "num_entries": 6567, "num_filter_entries": 6567, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763978907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.382594) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12256014 bytes
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.409197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 261.2 rd, 202.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(25.2) write-amplify(11.0) OK, records in: 7052, records dropped: 485 output_compression: NoCompression
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.409218) EVENT_LOG_v1 {"time_micros": 1763978907409209, "job": 42, "event": "compaction_finished", "compaction_time_micros": 60529, "compaction_time_cpu_micros": 25177, "output_level": 6, "num_output_files": 1, "total_output_size": 12256014, "num_input_records": 7052, "num_output_records": 6567, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907409666, "job": 42, "event": "table_file_deletion", "file_number": 71}
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907412127, "job": 42, "event": "table_file_deletion", "file_number": 69}
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.321491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:08:27.412261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:08:27 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:08:27 np0005533252 nova_compute[230010]: 2025-11-24 10:08:27.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:28.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:28 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:08:28 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:08:28 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:08:28 np0005533252 nova_compute[230010]: 2025-11-24 10:08:28.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:29.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:29 np0005533252 nova_compute[230010]: 2025-11-24 10:08:29.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:29 np0005533252 nova_compute[230010]: 2025-11-24 10:08:29.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:08:29 np0005533252 nova_compute[230010]: 2025-11-24 10:08:29.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:08:29 np0005533252 nova_compute[230010]: 2025-11-24 10:08:29.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:08:29 np0005533252 nova_compute[230010]: 2025-11-24 10:08:29.793 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:08:29 np0005533252 nova_compute[230010]: 2025-11-24 10:08:29.794 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:08:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:30.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:08:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/463285442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:08:30 np0005533252 nova_compute[230010]: 2025-11-24 10:08:30.264 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:08:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:08:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:08:30 np0005533252 nova_compute[230010]: 2025-11-24 10:08:30.454 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:08:30 np0005533252 nova_compute[230010]: 2025-11-24 10:08:30.455 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4856MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:08:30 np0005533252 nova_compute[230010]: 2025-11-24 10:08:30.456 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:08:30 np0005533252 nova_compute[230010]: 2025-11-24 10:08:30.456 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:08:30 np0005533252 nova_compute[230010]: 2025-11-24 10:08:30.599 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:08:30 np0005533252 nova_compute[230010]: 2025-11-24 10:08:30.599 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:08:30 np0005533252 nova_compute[230010]: 2025-11-24 10:08:30.620 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:08:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:08:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3228508666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.071 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.078 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.094 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.097 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.097 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:08:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:31.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.362 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.373 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.498 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.499 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.499 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.517 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.517 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:31 np0005533252 nova_compute[230010]: 2025-11-24 10:08:31.519 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:08:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:08:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:32.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:32 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:08:32 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:08:32 np0005533252 nova_compute[230010]: 2025-11-24 10:08:32.779 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:08:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:33.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:34.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:35.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:35 np0005533252 podman[252650]: 2025-11-24 10:08:35.318540264 +0000 UTC m=+0.056500386 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 05:08:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:36.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:36 np0005533252 nova_compute[230010]: 2025-11-24 10:08:36.365 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:36 np0005533252 nova_compute[230010]: 2025-11-24 10:08:36.374 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:41.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:41 np0005533252 nova_compute[230010]: 2025-11-24 10:08:41.367 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:41 np0005533252 nova_compute[230010]: 2025-11-24 10:08:41.377 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:43.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:45.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:08:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:08:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:46.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:46 np0005533252 nova_compute[230010]: 2025-11-24 10:08:46.371 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:46 np0005533252 nova_compute[230010]: 2025-11-24 10:08:46.379 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:47.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:48.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:49.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:50.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:51 np0005533252 podman[252703]: 2025-11-24 10:08:51.373546707 +0000 UTC m=+0.106872051 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 24 05:08:51 np0005533252 nova_compute[230010]: 2025-11-24 10:08:51.374 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:51 np0005533252 nova_compute[230010]: 2025-11-24 10:08:51.381 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:52.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:53.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:08:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:54.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:08:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:08:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:56 np0005533252 nova_compute[230010]: 2025-11-24 10:08:56.377 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:56 np0005533252 nova_compute[230010]: 2025-11-24 10:08:56.383 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:08:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:57 np0005533252 podman[252728]: 2025-11-24 10:08:57.417093865 +0000 UTC m=+0.157533741 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:08:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:58.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:08:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:08:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:59.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:08:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:00.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:09:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:09:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:01.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:01 np0005533252 nova_compute[230010]: 2025-11-24 10:09:01.381 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:01 np0005533252 nova_compute[230010]: 2025-11-24 10:09:01.384 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:02.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:03.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:05.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:06.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:06 np0005533252 podman[252759]: 2025-11-24 10:09:06.357706259 +0000 UTC m=+0.095323055 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 05:09:06 np0005533252 nova_compute[230010]: 2025-11-24 10:09:06.385 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:06 np0005533252 nova_compute[230010]: 2025-11-24 10:09:06.388 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:07.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:09.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:10.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:11.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:11 np0005533252 nova_compute[230010]: 2025-11-24 10:09:11.389 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:11 np0005533252 nova_compute[230010]: 2025-11-24 10:09:11.391 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:13.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 05:09:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:15.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 05:09:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:09:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:09:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:16.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:16 np0005533252 nova_compute[230010]: 2025-11-24 10:09:16.391 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:16 np0005533252 nova_compute[230010]: 2025-11-24 10:09:16.392 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:18.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:19.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:09:20.075 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:09:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:09:20.076 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:09:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:09:20.076 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:09:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:20.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:21.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:21 np0005533252 nova_compute[230010]: 2025-11-24 10:09:21.392 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:21 np0005533252 nova_compute[230010]: 2025-11-24 10:09:21.394 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:21 np0005533252 nova_compute[230010]: 2025-11-24 10:09:21.394 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 05:09:21 np0005533252 nova_compute[230010]: 2025-11-24 10:09:21.394 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:09:21 np0005533252 nova_compute[230010]: 2025-11-24 10:09:21.394 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:09:21 np0005533252 nova_compute[230010]: 2025-11-24 10:09:21.396 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:21 np0005533252 nova_compute[230010]: 2025-11-24 10:09:21.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:22.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:22 np0005533252 podman[252811]: 2025-11-24 10:09:22.342417548 +0000 UTC m=+0.085833991 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 05:09:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:23.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:24.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:25.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:25 np0005533252 nova_compute[230010]: 2025-11-24 10:09:25.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:26.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:26 np0005533252 nova_compute[230010]: 2025-11-24 10:09:26.396 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:26 np0005533252 nova_compute[230010]: 2025-11-24 10:09:26.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:26 np0005533252 nova_compute[230010]: 2025-11-24 10:09:26.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:09:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:27 np0005533252 nova_compute[230010]: 2025-11-24 10:09:27.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:28.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:28 np0005533252 podman[252835]: 2025-11-24 10:09:28.355945789 +0000 UTC m=+0.099035847 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 05:09:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:09:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:29.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:09:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:30.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:09:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:09:30 np0005533252 nova_compute[230010]: 2025-11-24 10:09:30.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:30 np0005533252 nova_compute[230010]: 2025-11-24 10:09:30.786 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:31.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:31 np0005533252 nova_compute[230010]: 2025-11-24 10:09:31.399 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:31 np0005533252 nova_compute[230010]: 2025-11-24 10:09:31.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:31 np0005533252 nova_compute[230010]: 2025-11-24 10:09:31.788 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:09:31 np0005533252 nova_compute[230010]: 2025-11-24 10:09:31.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:09:31 np0005533252 nova_compute[230010]: 2025-11-24 10:09:31.789 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:09:31 np0005533252 nova_compute[230010]: 2025-11-24 10:09:31.789 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:09:31 np0005533252 nova_compute[230010]: 2025-11-24 10:09:31.790 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:09:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:09:32 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/416098763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.237 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:09:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:32.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.440 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.441 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4881MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.442 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.442 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.517 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.517 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.545 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.746 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.747 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.765 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.805 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 24 05:09:32 np0005533252 nova_compute[230010]: 2025-11-24 10:09:32.864 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2882460740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:09:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:33.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:33 np0005533252 nova_compute[230010]: 2025-11-24 10:09:33.318 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:09:33 np0005533252 nova_compute[230010]: 2025-11-24 10:09:33.324 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:09:33 np0005533252 nova_compute[230010]: 2025-11-24 10:09:33.356 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:09:33 np0005533252 nova_compute[230010]: 2025-11-24 10:09:33.359 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:09:33 np0005533252 nova_compute[230010]: 2025-11-24 10:09:33.360 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:09:33 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:09:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:34 np0005533252 nova_compute[230010]: 2025-11-24 10:09:34.361 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:34 np0005533252 nova_compute[230010]: 2025-11-24 10:09:34.361 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:09:34 np0005533252 nova_compute[230010]: 2025-11-24 10:09:34.361 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:09:34 np0005533252 nova_compute[230010]: 2025-11-24 10:09:34.383 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:09:34 np0005533252 nova_compute[230010]: 2025-11-24 10:09:34.383 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:34 np0005533252 nova_compute[230010]: 2025-11-24 10:09:34.768 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:09:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:09:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:35.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:09:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:36.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:36 np0005533252 nova_compute[230010]: 2025-11-24 10:09:36.401 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:36 np0005533252 nova_compute[230010]: 2025-11-24 10:09:36.403 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:36 np0005533252 nova_compute[230010]: 2025-11-24 10:09:36.403 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 05:09:36 np0005533252 nova_compute[230010]: 2025-11-24 10:09:36.403 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:09:36 np0005533252 nova_compute[230010]: 2025-11-24 10:09:36.404 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:09:36 np0005533252 nova_compute[230010]: 2025-11-24 10:09:36.406 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:37.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:37 np0005533252 podman[253018]: 2025-11-24 10:09:37.32294631 +0000 UTC m=+0.058194729 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 05:09:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:09:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:09:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:38.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:38 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:09:38 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:09:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:39.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:40.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:41.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:41 np0005533252 nova_compute[230010]: 2025-11-24 10:09:41.402 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:41 np0005533252 nova_compute[230010]: 2025-11-24 10:09:41.406 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:42.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:45.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:09:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:09:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:46 np0005533252 nova_compute[230010]: 2025-11-24 10:09:46.408 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:46 np0005533252 nova_compute[230010]: 2025-11-24 10:09:46.409 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:09:46 np0005533252 nova_compute[230010]: 2025-11-24 10:09:46.409 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 05:09:46 np0005533252 nova_compute[230010]: 2025-11-24 10:09:46.410 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:09:46 np0005533252 nova_compute[230010]: 2025-11-24 10:09:46.448 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:46 np0005533252 nova_compute[230010]: 2025-11-24 10:09:46.449 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:09:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:47.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:48.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:49.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:50.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:51 np0005533252 nova_compute[230010]: 2025-11-24 10:09:51.451 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:52.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:53 np0005533252 podman[253097]: 2025-11-24 10:09:53.321150846 +0000 UTC m=+0.059128541 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 05:09:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:53.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:54.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:09:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:09:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:56.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:09:56 np0005533252 nova_compute[230010]: 2025-11-24 10:09:56.451 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:09:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:57.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:58.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:09:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:09:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:09:59 np0005533252 podman[253121]: 2025-11-24 10:09:59.367268565 +0000 UTC m=+0.109953438 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 24 05:09:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:00.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:10:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:10:00 np0005533252 ceph-mon[80009]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Nov 24 05:10:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:01.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:01 np0005533252 nova_compute[230010]: 2025-11-24 10:10:01.453 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:10:01 np0005533252 nova_compute[230010]: 2025-11-24 10:10:01.455 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:01 np0005533252 nova_compute[230010]: 2025-11-24 10:10:01.455 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 05:10:01 np0005533252 nova_compute[230010]: 2025-11-24 10:10:01.455 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:10:01 np0005533252 nova_compute[230010]: 2025-11-24 10:10:01.455 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:10:01 np0005533252 nova_compute[230010]: 2025-11-24 10:10:01.456 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:04.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:06.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:06 np0005533252 nova_compute[230010]: 2025-11-24 10:10:06.456 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:08 np0005533252 podman[253154]: 2025-11-24 10:10:08.317527999 +0000 UTC m=+0.057757277 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 05:10:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:08.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:11 np0005533252 nova_compute[230010]: 2025-11-24 10:10:11.457 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:10:11 np0005533252 nova_compute[230010]: 2025-11-24 10:10:11.459 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:11 np0005533252 nova_compute[230010]: 2025-11-24 10:10:11.459 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 05:10:11 np0005533252 nova_compute[230010]: 2025-11-24 10:10:11.459 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:10:11 np0005533252 nova_compute[230010]: 2025-11-24 10:10:11.460 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:10:11 np0005533252 nova_compute[230010]: 2025-11-24 10:10:11.461 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:12.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:13.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:14.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:15.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:10:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:10:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:10:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:10:16 np0005533252 nova_compute[230010]: 2025-11-24 10:10:16.460 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:18.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:19.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:10:20.077 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:10:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:10:20.078 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:10:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:10:20.078 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:10:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:20.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:21.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:21 np0005533252 nova_compute[230010]: 2025-11-24 10:10:21.461 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:21 np0005533252 nova_compute[230010]: 2025-11-24 10:10:21.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:22.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:24 np0005533252 nova_compute[230010]: 2025-11-24 10:10:24.103 230014 DEBUG oslo_concurrency.processutils [None req-df03cd5c-5660-4536-b19c-eb403e13ec09 1498c4791c234bc884ea0fabb778d239 cf636babb68a4ebe9bf137d3fe0e4c0c - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:10:24 np0005533252 nova_compute[230010]: 2025-11-24 10:10:24.132 230014 DEBUG oslo_concurrency.processutils [None req-df03cd5c-5660-4536-b19c-eb403e13ec09 1498c4791c234bc884ea0fabb778d239 cf636babb68a4ebe9bf137d3fe0e4c0c - - default default] CMD "env LANG=C uptime" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:10:24 np0005533252 podman[253207]: 2025-11-24 10:10:24.396178748 +0000 UTC m=+0.130573282 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 05:10:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:25 np0005533252 nova_compute[230010]: 2025-11-24 10:10:25.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:26.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:26 np0005533252 nova_compute[230010]: 2025-11-24 10:10:26.462 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:27 np0005533252 nova_compute[230010]: 2025-11-24 10:10:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:27 np0005533252 nova_compute[230010]: 2025-11-24 10:10:27.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:10:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:28.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:10:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:10:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:29 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:10:29.699 142336 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 05:10:29 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:10:29.700 142336 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 05:10:29 np0005533252 nova_compute[230010]: 2025-11-24 10:10:29.701 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:29 np0005533252 nova_compute[230010]: 2025-11-24 10:10:29.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:30 np0005533252 podman[253231]: 2025-11-24 10:10:30.420503831 +0000 UTC m=+0.149001255 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 05:10:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:10:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:10:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 05:10:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:30.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 05:10:30 np0005533252 nova_compute[230010]: 2025-11-24 10:10:30.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:31.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:31 np0005533252 nova_compute[230010]: 2025-11-24 10:10:31.463 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:31 np0005533252 nova_compute[230010]: 2025-11-24 10:10:31.466 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:32.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:32 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:10:32.702 142336 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=803b139a-7fca-4549-8597-645cf677225d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 05:10:32 np0005533252 nova_compute[230010]: 2025-11-24 10:10:32.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:33.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:33 np0005533252 nova_compute[230010]: 2025-11-24 10:10:33.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:33 np0005533252 nova_compute[230010]: 2025-11-24 10:10:33.792 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:10:33 np0005533252 nova_compute[230010]: 2025-11-24 10:10:33.792 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:10:33 np0005533252 nova_compute[230010]: 2025-11-24 10:10:33.792 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:10:33 np0005533252 nova_compute[230010]: 2025-11-24 10:10:33.793 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:10:33 np0005533252 nova_compute[230010]: 2025-11-24 10:10:33.793 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:10:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:10:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4143398664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.240 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.405 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.406 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4877MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.406 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.406 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.457 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.458 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:10:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:34.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.476 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:10:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:10:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2672638891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.943 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.948 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.962 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.963 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:10:34 np0005533252 nova_compute[230010]: 2025-11-24 10:10:34.964 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:10:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:35.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:35 np0005533252 nova_compute[230010]: 2025-11-24 10:10:35.964 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:35 np0005533252 nova_compute[230010]: 2025-11-24 10:10:35.965 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:10:35 np0005533252 nova_compute[230010]: 2025-11-24 10:10:35.965 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:10:35 np0005533252 nova_compute[230010]: 2025-11-24 10:10:35.980 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:10:35 np0005533252 nova_compute[230010]: 2025-11-24 10:10:35.982 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.429704) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036429968, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 251, "total_data_size": 3700428, "memory_usage": 3776800, "flush_reason": "Manual Compaction"}
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036446634, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 2415944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37938, "largest_seqno": 39448, "table_properties": {"data_size": 2409563, "index_size": 3580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13829, "raw_average_key_size": 20, "raw_value_size": 2396602, "raw_average_value_size": 3488, "num_data_blocks": 156, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978907, "oldest_key_time": 1763978907, "file_creation_time": 1763979036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 16996 microseconds, and 11463 cpu microseconds.
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.446699) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 2415944 bytes OK
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.446729) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.448972) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.449040) EVENT_LOG_v1 {"time_micros": 1763979036449026, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.449071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3693352, prev total WAL file size 3693352, number of live WAL files 2.
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.450609) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(2359KB)], [72(11MB)]
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036450667, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 14671958, "oldest_snapshot_seqno": -1}
Nov 24 05:10:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:36 np0005533252 nova_compute[230010]: 2025-11-24 10:10:36.489 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6738 keys, 12552617 bytes, temperature: kUnknown
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036534768, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 12552617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12510495, "index_size": 24154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 177095, "raw_average_key_size": 26, "raw_value_size": 12392045, "raw_average_value_size": 1839, "num_data_blocks": 946, "num_entries": 6738, "num_filter_entries": 6738, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763979036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.535228) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12552617 bytes
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.536881) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.2 rd, 149.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(11.3) write-amplify(5.2) OK, records in: 7254, records dropped: 516 output_compression: NoCompression
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.536953) EVENT_LOG_v1 {"time_micros": 1763979036536926, "job": 44, "event": "compaction_finished", "compaction_time_micros": 84230, "compaction_time_cpu_micros": 51362, "output_level": 6, "num_output_files": 1, "total_output_size": 12552617, "num_input_records": 7254, "num_output_records": 6738, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036538221, "job": 44, "event": "table_file_deletion", "file_number": 74}
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036542798, "job": 44, "event": "table_file_deletion", "file_number": 72}
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.450517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:10:36 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:10:36.543073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:10:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:37.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:38.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:38 np0005533252 podman[253354]: 2025-11-24 10:10:38.507367494 +0000 UTC m=+0.073876012 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:10:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:39.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:10:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:41.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:41 np0005533252 nova_compute[230010]: 2025-11-24 10:10:41.490 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:41 np0005533252 nova_compute[230010]: 2025-11-24 10:10:41.493 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:42.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:10:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:10:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:44.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:44 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:10:44 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:10:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:45.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:10:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:10:46 np0005533252 nova_compute[230010]: 2025-11-24 10:10:46.493 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:46.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:47.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:48.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:50.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:51.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:51 np0005533252 nova_compute[230010]: 2025-11-24 10:10:51.496 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:10:51 np0005533252 nova_compute[230010]: 2025-11-24 10:10:51.498 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:10:51 np0005533252 nova_compute[230010]: 2025-11-24 10:10:51.499 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 05:10:51 np0005533252 nova_compute[230010]: 2025-11-24 10:10:51.499 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:10:51 np0005533252 nova_compute[230010]: 2025-11-24 10:10:51.529 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:51 np0005533252 nova_compute[230010]: 2025-11-24 10:10:51.530 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:10:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:52.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:53.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:54.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:10:55 np0005533252 podman[253489]: 2025-11-24 10:10:55.371674454 +0000 UTC m=+0.106161514 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:10:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:55.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:56.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:56 np0005533252 nova_compute[230010]: 2025-11-24 10:10:56.530 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:10:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:10:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:10:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:10:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:10:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:10:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 05:10:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:59.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 05:11:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:11:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:11:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:01 np0005533252 podman[253512]: 2025-11-24 10:11:01.422264253 +0000 UTC m=+0.151461704 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 05:11:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:01.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:01 np0005533252 nova_compute[230010]: 2025-11-24 10:11:01.532 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:11:01 np0005533252 nova_compute[230010]: 2025-11-24 10:11:01.534 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:01 np0005533252 nova_compute[230010]: 2025-11-24 10:11:01.534 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 05:11:01 np0005533252 nova_compute[230010]: 2025-11-24 10:11:01.534 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:11:01 np0005533252 nova_compute[230010]: 2025-11-24 10:11:01.535 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:11:01 np0005533252 nova_compute[230010]: 2025-11-24 10:11:01.538 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:01 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 05:11:01 np0005533252 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 05:11:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 05:11:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142673402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 05:11:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 05:11:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142673402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 05:11:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:02.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:03.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:04.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:05.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:06 np0005533252 nova_compute[230010]: 2025-11-24 10:11:06.534 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:06 np0005533252 nova_compute[230010]: 2025-11-24 10:11:06.539 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:06.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:07.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:08.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:09 np0005533252 podman[253544]: 2025-11-24 10:11:09.335198773 +0000 UTC m=+0.063026647 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Nov 24 05:11:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:09.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:10.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:11 np0005533252 nova_compute[230010]: 2025-11-24 10:11:11.538 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:12.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:11:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:13.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:11:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:14.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:11:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:11:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:11:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:15.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:11:16 np0005533252 nova_compute[230010]: 2025-11-24 10:11:16.541 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:11:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:11:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:11:20.079 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:11:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:11:20.079 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:11:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:11:20.080 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:11:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:20.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:21.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:21 np0005533252 nova_compute[230010]: 2025-11-24 10:11:21.541 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:21 np0005533252 nova_compute[230010]: 2025-11-24 10:11:21.544 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:22.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:22 np0005533252 nova_compute[230010]: 2025-11-24 10:11:22.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:23.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:25.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:25 np0005533252 nova_compute[230010]: 2025-11-24 10:11:25.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:26 np0005533252 podman[253598]: 2025-11-24 10:11:26.327957522 +0000 UTC m=+0.065961699 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 24 05:11:26 np0005533252 nova_compute[230010]: 2025-11-24 10:11:26.543 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:26.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:27 np0005533252 nova_compute[230010]: 2025-11-24 10:11:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:27 np0005533252 nova_compute[230010]: 2025-11-24 10:11:27.766 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:11:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:28.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:11:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:29.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:11:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:29 np0005533252 nova_compute[230010]: 2025-11-24 10:11:29.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:11:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:11:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:30.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:31.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:31 np0005533252 nova_compute[230010]: 2025-11-24 10:11:31.547 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:11:32 np0005533252 podman[253646]: 2025-11-24 10:11:32.390376821 +0000 UTC m=+0.120760671 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 05:11:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:32.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:32 np0005533252 nova_compute[230010]: 2025-11-24 10:11:32.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:33.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:33 np0005533252 nova_compute[230010]: 2025-11-24 10:11:33.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:33 np0005533252 nova_compute[230010]: 2025-11-24 10:11:33.777 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:34.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:11:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:35.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:11:35 np0005533252 nova_compute[230010]: 2025-11-24 10:11:35.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:35 np0005533252 nova_compute[230010]: 2025-11-24 10:11:35.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:35 np0005533252 nova_compute[230010]: 2025-11-24 10:11:35.792 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:11:35 np0005533252 nova_compute[230010]: 2025-11-24 10:11:35.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:11:35 np0005533252 nova_compute[230010]: 2025-11-24 10:11:35.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:11:35 np0005533252 nova_compute[230010]: 2025-11-24 10:11:35.793 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:11:35 np0005533252 nova_compute[230010]: 2025-11-24 10:11:35.794 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:11:36 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:11:36 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3225313100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.272 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.427 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.428 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4869MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.429 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.429 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.478 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.479 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.495 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.549 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:36.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:36 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:11:36 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2059603497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.987 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:11:36 np0005533252 nova_compute[230010]: 2025-11-24 10:11:36.993 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:11:37 np0005533252 nova_compute[230010]: 2025-11-24 10:11:37.009 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:11:37 np0005533252 nova_compute[230010]: 2025-11-24 10:11:37.011 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:11:37 np0005533252 nova_compute[230010]: 2025-11-24 10:11:37.012 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:11:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:37.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:38 np0005533252 nova_compute[230010]: 2025-11-24 10:11:38.012 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:11:38 np0005533252 nova_compute[230010]: 2025-11-24 10:11:38.012 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:11:38 np0005533252 nova_compute[230010]: 2025-11-24 10:11:38.012 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:11:38 np0005533252 nova_compute[230010]: 2025-11-24 10:11:38.027 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:11:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:38.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:40 np0005533252 podman[253720]: 2025-11-24 10:11:40.326161132 +0000 UTC m=+0.065418715 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 24 05:11:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:40.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:11:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:41.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:11:41 np0005533252 nova_compute[230010]: 2025-11-24 10:11:41.553 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:11:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:42.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:44.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:45 np0005533252 podman[253869]: 2025-11-24 10:11:45.386674407 +0000 UTC m=+0.086573322 container exec fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 05:11:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:11:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:11:45 np0005533252 podman[253869]: 2025-11-24 10:11:45.487631033 +0000 UTC m=+0.187529958 container exec_died fca3d6a645ca50145f34396c21cf8798c75622ec7e27bb7d7b9d2df471762abc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 05:11:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:45.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:46 np0005533252 podman[254007]: 2025-11-24 10:11:46.103876332 +0000 UTC m=+0.063283973 container exec 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 05:11:46 np0005533252 podman[254007]: 2025-11-24 10:11:46.116916471 +0000 UTC m=+0.076324122 container exec_died 8385dba62896146966763f0bcd6866f05f5474182998a6b8c2dabcbf77545f8c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 05:11:46 np0005533252 nova_compute[230010]: 2025-11-24 10:11:46.555 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:11:46 np0005533252 podman[254126]: 2025-11-24 10:11:46.618452307 +0000 UTC m=+0.062195665 container exec 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 05:11:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:46.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:46 np0005533252 podman[254126]: 2025-11-24 10:11:46.627033498 +0000 UTC m=+0.070776836 container exec_died 5e659f329edd66b319b97f09144add025da99dc20b0b6d44046c2f8d632eb914 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-1-rsdpvy)
Nov 24 05:11:46 np0005533252 podman[254193]: 2025-11-24 10:11:46.86613563 +0000 UTC m=+0.071703489 container exec b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, architecture=x86_64, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9)
Nov 24 05:11:46 np0005533252 podman[254193]: 2025-11-24 10:11:46.876198067 +0000 UTC m=+0.081765906 container exec_died b150f4574d15a215dc003733c271f0cef75e4de7b269181ad25614a88f483866 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-1-vrgskq, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, vcs-type=git, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Nov 24 05:11:46 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 05:11:46 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 05:11:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:47.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 05:11:47 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:11:48 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:11:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:48.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:49 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 05:11:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 05:11:49 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:11:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:11:49 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:11:49 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:11:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:49.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:50.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:51.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:51 np0005533252 nova_compute[230010]: 2025-11-24 10:11:51.604 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:11:51 np0005533252 nova_compute[230010]: 2025-11-24 10:11:51.605 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:11:51 np0005533252 nova_compute[230010]: 2025-11-24 10:11:51.605 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5048 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 24 05:11:51 np0005533252 nova_compute[230010]: 2025-11-24 10:11:51.606 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:11:51 np0005533252 nova_compute[230010]: 2025-11-24 10:11:51.607 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:11:51 np0005533252 nova_compute[230010]: 2025-11-24 10:11:51.607 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 24 05:11:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:52.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:11:53 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:11:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:54 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:11:54 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:11:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:11:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:54.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:55.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:56 np0005533252 nova_compute[230010]: 2025-11-24 10:11:56.608 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 24 05:11:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:56.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:57 np0005533252 podman[254361]: 2025-11-24 10:11:57.383862895 +0000 UTC m=+0.102152425 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 05:11:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:11:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:57.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:11:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:11:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:58.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:11:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:11:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:11:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:59.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:11:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:12:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:12:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:00.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:01.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:02 np0005533252 nova_compute[230010]: 2025-11-24 10:12:02.024 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:02.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:03 np0005533252 podman[254386]: 2025-11-24 10:12:03.375057598 +0000 UTC m=+0.119224164 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 05:12:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:03.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:04.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:05.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:06 np0005533252 nova_compute[230010]: 2025-11-24 10:12:06.616 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:06.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:07 np0005533252 nova_compute[230010]: 2025-11-24 10:12:07.028 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:07.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:08.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:09.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:11 np0005533252 podman[254440]: 2025-11-24 10:12:11.177917519 +0000 UTC m=+0.066319387 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 24 05:12:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:11 np0005533252 nova_compute[230010]: 2025-11-24 10:12:11.664 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:12 np0005533252 nova_compute[230010]: 2025-11-24 10:12:12.030 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:12.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:13.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:12:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:12:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:16 np0005533252 nova_compute[230010]: 2025-11-24 10:12:16.665 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:17 np0005533252 nova_compute[230010]: 2025-11-24 10:12:17.033 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:17.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:18.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:19.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:12:20.080 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:12:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:12:20.080 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:12:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:12:20.080 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:12:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:12:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:12:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:21.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:21 np0005533252 nova_compute[230010]: 2025-11-24 10:12:21.667 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:21 np0005533252 nova_compute[230010]: 2025-11-24 10:12:21.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:22 np0005533252 nova_compute[230010]: 2025-11-24 10:12:22.035 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:22.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:23.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:24 np0005533252 nova_compute[230010]: 2025-11-24 10:12:24.775 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:25.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:26 np0005533252 nova_compute[230010]: 2025-11-24 10:12:26.671 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:27 np0005533252 nova_compute[230010]: 2025-11-24 10:12:27.037 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:27.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:27 np0005533252 nova_compute[230010]: 2025-11-24 10:12:27.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:28 np0005533252 podman[254469]: 2025-11-24 10:12:28.326708441 +0000 UTC m=+0.063749074 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 05:12:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:28.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:28 np0005533252 nova_compute[230010]: 2025-11-24 10:12:28.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:28 np0005533252 nova_compute[230010]: 2025-11-24 10:12:28.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:12:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:29.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:29 np0005533252 nova_compute[230010]: 2025-11-24 10:12:29.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:12:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:12:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:30.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:31 np0005533252 nova_compute[230010]: 2025-11-24 10:12:31.717 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:32 np0005533252 nova_compute[230010]: 2025-11-24 10:12:32.039 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:32.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:32 np0005533252 nova_compute[230010]: 2025-11-24 10:12:32.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:12:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:12:33 np0005533252 nova_compute[230010]: 2025-11-24 10:12:33.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:34 np0005533252 podman[254517]: 2025-11-24 10:12:34.371925907 +0000 UTC m=+0.102708619 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:12:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:34.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:35.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:35 np0005533252 nova_compute[230010]: 2025-11-24 10:12:35.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:36 np0005533252 nova_compute[230010]: 2025-11-24 10:12:36.719 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:36.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:36 np0005533252 nova_compute[230010]: 2025-11-24 10:12:36.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:36 np0005533252 nova_compute[230010]: 2025-11-24 10:12:36.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 24 05:12:36 np0005533252 nova_compute[230010]: 2025-11-24 10:12:36.779 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.040 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:37.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.779 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.780 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.780 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.795 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.796 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.824 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.825 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.825 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.825 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:12:37 np0005533252 nova_compute[230010]: 2025-11-24 10:12:37.825 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:12:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:12:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1011159927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:12:38 np0005533252 nova_compute[230010]: 2025-11-24 10:12:38.271 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:12:38 np0005533252 nova_compute[230010]: 2025-11-24 10:12:38.439 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:12:38 np0005533252 nova_compute[230010]: 2025-11-24 10:12:38.440 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4885MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:12:38 np0005533252 nova_compute[230010]: 2025-11-24 10:12:38.440 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:12:38 np0005533252 nova_compute[230010]: 2025-11-24 10:12:38.441 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:12:38 np0005533252 nova_compute[230010]: 2025-11-24 10:12:38.529 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:12:38 np0005533252 nova_compute[230010]: 2025-11-24 10:12:38.530 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:12:38 np0005533252 nova_compute[230010]: 2025-11-24 10:12:38.547 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:12:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:38.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:12:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1739914990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:12:39 np0005533252 nova_compute[230010]: 2025-11-24 10:12:39.021 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:12:39 np0005533252 nova_compute[230010]: 2025-11-24 10:12:39.028 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:12:39 np0005533252 nova_compute[230010]: 2025-11-24 10:12:39.046 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:12:39 np0005533252 nova_compute[230010]: 2025-11-24 10:12:39.049 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:12:39 np0005533252 nova_compute[230010]: 2025-11-24 10:12:39.049 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:12:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:39.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:39 np0005533252 nova_compute[230010]: 2025-11-24 10:12:39.766 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:12:39 np0005533252 nova_compute[230010]: 2025-11-24 10:12:39.767 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 24 05:12:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:40.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:41 np0005533252 podman[254590]: 2025-11-24 10:12:41.30820167 +0000 UTC m=+0.049452213 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 05:12:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:41.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:41 np0005533252 nova_compute[230010]: 2025-11-24 10:12:41.721 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:42 np0005533252 nova_compute[230010]: 2025-11-24 10:12:42.042 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.308441) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162308495, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 255, "total_data_size": 3762434, "memory_usage": 3808040, "flush_reason": "Manual Compaction"}
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162322123, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2458052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39453, "largest_seqno": 40963, "table_properties": {"data_size": 2451627, "index_size": 3624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13693, "raw_average_key_size": 19, "raw_value_size": 2438570, "raw_average_value_size": 3539, "num_data_blocks": 156, "num_entries": 689, "num_filter_entries": 689, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763979037, "oldest_key_time": 1763979037, "file_creation_time": 1763979162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 13726 microseconds, and 5391 cpu microseconds.
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.322173) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2458052 bytes OK
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.322196) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.323880) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.323893) EVENT_LOG_v1 {"time_micros": 1763979162323890, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.323914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3755342, prev total WAL file size 3755342, number of live WAL files 2.
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.324914) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323534' seq:0, type:0; will stop at (end)
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2400KB)], [75(11MB)]
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162324945, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 15010669, "oldest_snapshot_seqno": -1}
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6899 keys, 14849424 bytes, temperature: kUnknown
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162405366, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 14849424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14803871, "index_size": 27201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 181377, "raw_average_key_size": 26, "raw_value_size": 14680182, "raw_average_value_size": 2127, "num_data_blocks": 1072, "num_entries": 6899, "num_filter_entries": 6899, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763979162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.405918) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14849424 bytes
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.410086) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.1 rd, 184.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 12.0 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 7427, records dropped: 528 output_compression: NoCompression
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.410126) EVENT_LOG_v1 {"time_micros": 1763979162410108, "job": 46, "event": "compaction_finished", "compaction_time_micros": 80640, "compaction_time_cpu_micros": 28525, "output_level": 6, "num_output_files": 1, "total_output_size": 14849424, "num_input_records": 7427, "num_output_records": 6899, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162411551, "job": 46, "event": "table_file_deletion", "file_number": 77}
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162416937, "job": 46, "event": "table_file_deletion", "file_number": 75}
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.324832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:12:42 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:12:42.417136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:12:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:43.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:12:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:44.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:12:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:12:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:12:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:45.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:46 np0005533252 nova_compute[230010]: 2025-11-24 10:12:46.724 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:47 np0005533252 nova_compute[230010]: 2025-11-24 10:12:47.044 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:48.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:50.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:12:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:12:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:51.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:12:51 np0005533252 nova_compute[230010]: 2025-11-24 10:12:51.727 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:52 np0005533252 nova_compute[230010]: 2025-11-24 10:12:52.046 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:52.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:12:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:54.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:55 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:12:55 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:12:55 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:12:55 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:12:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:55.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:56 np0005533252 nova_compute[230010]: 2025-11-24 10:12:56.729 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:56.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:57 np0005533252 nova_compute[230010]: 2025-11-24 10:12:57.050 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:12:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:57.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:12:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:58.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:12:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:12:58 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:12:59 np0005533252 podman[254752]: 2025-11-24 10:12:59.150372166 +0000 UTC m=+0.067369113 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 05:12:59 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:12:59 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:12:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:12:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:12:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:12:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:13:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:13:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:01 np0005533252 nova_compute[230010]: 2025-11-24 10:13:01.730 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 05:13:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3040130363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 05:13:01 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 05:13:01 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3040130363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 05:13:02 np0005533252 nova_compute[230010]: 2025-11-24 10:13:02.052 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:02.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:03.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:04.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:05 np0005533252 podman[254778]: 2025-11-24 10:13:05.370545563 +0000 UTC m=+0.109703021 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 05:13:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:06 np0005533252 nova_compute[230010]: 2025-11-24 10:13:06.734 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:06.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:07 np0005533252 nova_compute[230010]: 2025-11-24 10:13:07.054 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:07.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:08.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:09.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:10.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:11 np0005533252 podman[254831]: 2025-11-24 10:13:11.491134847 +0000 UTC m=+0.077589643 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 05:13:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:11.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:11 np0005533252 nova_compute[230010]: 2025-11-24 10:13:11.735 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:12 np0005533252 nova_compute[230010]: 2025-11-24 10:13:12.056 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:12.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:13:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:13.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:13:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:13:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:13:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:16 np0005533252 nova_compute[230010]: 2025-11-24 10:13:16.738 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:16.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:17 np0005533252 nova_compute[230010]: 2025-11-24 10:13:17.058 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:13:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:13:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:18.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:13:20.081 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:13:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:13:20.081 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:13:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:13:20.081 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:13:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:21.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:21 np0005533252 nova_compute[230010]: 2025-11-24 10:13:21.741 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:22 np0005533252 nova_compute[230010]: 2025-11-24 10:13:22.059 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:23.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:25.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:26 np0005533252 nova_compute[230010]: 2025-11-24 10:13:26.743 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:26 np0005533252 nova_compute[230010]: 2025-11-24 10:13:26.778 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:26.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:27 np0005533252 nova_compute[230010]: 2025-11-24 10:13:27.061 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:27.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:28.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:29 np0005533252 podman[254860]: 2025-11-24 10:13:29.318053279 +0000 UTC m=+0.058117177 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 05:13:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:29.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:29 np0005533252 nova_compute[230010]: 2025-11-24 10:13:29.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:29 np0005533252 nova_compute[230010]: 2025-11-24 10:13:29.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:29 np0005533252 nova_compute[230010]: 2025-11-24 10:13:29.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:29 np0005533252 nova_compute[230010]: 2025-11-24 10:13:29.764 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:13:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:13:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:13:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:30.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:31.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:31 np0005533252 nova_compute[230010]: 2025-11-24 10:13:31.743 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:32 np0005533252 nova_compute[230010]: 2025-11-24 10:13:32.062 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:32 np0005533252 nova_compute[230010]: 2025-11-24 10:13:32.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:32.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:33 np0005533252 nova_compute[230010]: 2025-11-24 10:13:33.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:34 np0005533252 nova_compute[230010]: 2025-11-24 10:13:34.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:34.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:36 np0005533252 podman[254911]: 2025-11-24 10:13:36.219448346 +0000 UTC m=+0.115468192 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 05:13:36 np0005533252 nova_compute[230010]: 2025-11-24 10:13:36.745 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:37 np0005533252 nova_compute[230010]: 2025-11-24 10:13:37.063 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:37 np0005533252 nova_compute[230010]: 2025-11-24 10:13:37.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:37 np0005533252 nova_compute[230010]: 2025-11-24 10:13:37.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:37 np0005533252 nova_compute[230010]: 2025-11-24 10:13:37.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:13:37 np0005533252 nova_compute[230010]: 2025-11-24 10:13:37.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:13:37 np0005533252 nova_compute[230010]: 2025-11-24 10:13:37.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:13:37 np0005533252 nova_compute[230010]: 2025-11-24 10:13:37.785 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:13:37 np0005533252 nova_compute[230010]: 2025-11-24 10:13:37.785 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:13:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:13:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3050926301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.210 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.401 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.403 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4878MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.404 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.404 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.463 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.464 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.486 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:13:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:13:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4194534558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.977 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.981 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.998 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.999 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:13:38 np0005533252 nova_compute[230010]: 2025-11-24 10:13:38.999 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:13:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:39.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:13:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:40.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:13:41 np0005533252 nova_compute[230010]: 2025-11-24 10:13:41.001 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:13:41 np0005533252 nova_compute[230010]: 2025-11-24 10:13:41.001 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:13:41 np0005533252 nova_compute[230010]: 2025-11-24 10:13:41.002 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:13:41 np0005533252 nova_compute[230010]: 2025-11-24 10:13:41.017 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:13:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:41.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:41 np0005533252 nova_compute[230010]: 2025-11-24 10:13:41.747 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:42 np0005533252 nova_compute[230010]: 2025-11-24 10:13:42.065 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:42 np0005533252 podman[254984]: 2025-11-24 10:13:42.341702221 +0000 UTC m=+0.070231353 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 05:13:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:42.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:13:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:44.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:13:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:13:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:13:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:45.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:46 np0005533252 nova_compute[230010]: 2025-11-24 10:13:46.749 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:47 np0005533252 nova_compute[230010]: 2025-11-24 10:13:47.067 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:47.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:49.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:50.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:51.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:51 np0005533252 nova_compute[230010]: 2025-11-24 10:13:51.753 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:52 np0005533252 nova_compute[230010]: 2025-11-24 10:13:52.069 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:52.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:13:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:53.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:13:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:54.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:55.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:56 np0005533252 nova_compute[230010]: 2025-11-24 10:13:56.819 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:57 np0005533252 nova_compute[230010]: 2025-11-24 10:13:57.072 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:13:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:57.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:59 np0005533252 podman[255085]: 2025-11-24 10:13:59.411489078 +0000 UTC m=+0.053357579 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:13:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:13:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:13:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:59.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:13:59 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:14:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:14:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:14:00 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:14:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:14:00 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:14:00 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:14:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:14:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:00.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:14:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:01.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:01 np0005533252 nova_compute[230010]: 2025-11-24 10:14:01.821 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:02 np0005533252 nova_compute[230010]: 2025-11-24 10:14:02.074 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:03.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.853786) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244853817, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1085, "num_deletes": 251, "total_data_size": 2420216, "memory_usage": 2461328, "flush_reason": "Manual Compaction"}
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244867041, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1591187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40968, "largest_seqno": 42048, "table_properties": {"data_size": 1586358, "index_size": 2353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10689, "raw_average_key_size": 19, "raw_value_size": 1576631, "raw_average_value_size": 2925, "num_data_blocks": 103, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763979162, "oldest_key_time": 1763979162, "file_creation_time": 1763979244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 13337 microseconds, and 4219 cpu microseconds.
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.867123) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1591187 bytes OK
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.867147) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868839) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868855) EVENT_LOG_v1 {"time_micros": 1763979244868850, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868878) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2414878, prev total WAL file size 2414878, number of live WAL files 2.
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.872037) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1553KB)], [78(14MB)]
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244872145, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 16440611, "oldest_snapshot_seqno": -1}
Nov 24 05:14:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:04.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6922 keys, 14252441 bytes, temperature: kUnknown
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244938985, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 14252441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14207712, "index_size": 26313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 182554, "raw_average_key_size": 26, "raw_value_size": 14084537, "raw_average_value_size": 2034, "num_data_blocks": 1029, "num_entries": 6922, "num_filter_entries": 6922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976422, "oldest_key_time": 0, "file_creation_time": 1763979244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "299c38d0-06ca-4074-b462-97cee3c14bc3", "db_session_id": "IKBI0BILOO7CZC90TSBP", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.939484) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 14252441 bytes
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.941570) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.4 rd, 212.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 14.2 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(19.3) write-amplify(9.0) OK, records in: 7438, records dropped: 516 output_compression: NoCompression
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.941601) EVENT_LOG_v1 {"time_micros": 1763979244941586, "job": 48, "event": "compaction_finished", "compaction_time_micros": 66992, "compaction_time_cpu_micros": 30754, "output_level": 6, "num_output_files": 1, "total_output_size": 14252441, "num_input_records": 7438, "num_output_records": 6922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244942636, "job": 48, "event": "table_file_deletion", "file_number": 80}
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244948316, "job": 48, "event": "table_file_deletion", "file_number": 78}
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.871900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:14:04 np0005533252 ceph-mon[80009]: rocksdb: (Original Log Time 2025/11/24-10:14:04.948542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 05:14:05 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:14:05 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:14:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:05.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:06 np0005533252 podman[255166]: 2025-11-24 10:14:06.387268409 +0000 UTC m=+0.115139264 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 05:14:06 np0005533252 nova_compute[230010]: 2025-11-24 10:14:06.825 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:06.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:07 np0005533252 nova_compute[230010]: 2025-11-24 10:14:07.076 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:07.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:08.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:09.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:10.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:11.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:11 np0005533252 nova_compute[230010]: 2025-11-24 10:14:11.827 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:12 np0005533252 nova_compute[230010]: 2025-11-24 10:14:12.078 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:12 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:12 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:12.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:13 np0005533252 podman[255221]: 2025-11-24 10:14:13.370065973 +0000 UTC m=+0.097563543 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 05:14:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:14 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:14 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:14 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:14.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:14:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:14:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:16 np0005533252 nova_compute[230010]: 2025-11-24 10:14:16.829 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:16 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:16 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:16 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:16.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:17 np0005533252 nova_compute[230010]: 2025-11-24 10:14:17.079 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:18 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:18 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:18 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:18.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:14:20.082 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:14:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:14:20.083 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:14:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:14:20.083 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:14:20 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:20 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:20 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:20.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:21 np0005533252 nova_compute[230010]: 2025-11-24 10:14:21.831 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:22 np0005533252 nova_compute[230010]: 2025-11-24 10:14:22.080 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:22 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:22 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:22 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:24 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:24 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:24 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:24 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:24.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:26 np0005533252 nova_compute[230010]: 2025-11-24 10:14:26.831 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:26 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:26 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:26 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:26.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:27 np0005533252 nova_compute[230010]: 2025-11-24 10:14:27.082 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:14:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:14:28 np0005533252 nova_compute[230010]: 2025-11-24 10:14:28.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:28 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:28 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.003000071s ======
Nov 24 05:14:28 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Nov 24 05:14:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:29 np0005533252 nova_compute[230010]: 2025-11-24 10:14:29.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:30 np0005533252 podman[255250]: 2025-11-24 10:14:30.32560661 +0000 UTC m=+0.065002164 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 05:14:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:14:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:14:30 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:30 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:30 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:30.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:31 np0005533252 nova_compute[230010]: 2025-11-24 10:14:31.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:31 np0005533252 nova_compute[230010]: 2025-11-24 10:14:31.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:31 np0005533252 nova_compute[230010]: 2025-11-24 10:14:31.764 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:14:31 np0005533252 nova_compute[230010]: 2025-11-24 10:14:31.833 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:32 np0005533252 nova_compute[230010]: 2025-11-24 10:14:32.084 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:32 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:32 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:32 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:32.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:33 np0005533252 nova_compute[230010]: 2025-11-24 10:14:33.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:34 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:34 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:34 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:34.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:35 np0005533252 nova_compute[230010]: 2025-11-24 10:14:35.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:36 np0005533252 nova_compute[230010]: 2025-11-24 10:14:36.835 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:36 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:36 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:36 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:36.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:37 np0005533252 nova_compute[230010]: 2025-11-24 10:14:37.086 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:37 np0005533252 podman[255298]: 2025-11-24 10:14:37.359423705 +0000 UTC m=+0.094363385 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 05:14:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:37.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:38 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:38 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:38 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:39 np0005533252 nova_compute[230010]: 2025-11-24 10:14:39.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:39 np0005533252 nova_compute[230010]: 2025-11-24 10:14:39.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:39 np0005533252 nova_compute[230010]: 2025-11-24 10:14:39.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:14:39 np0005533252 nova_compute[230010]: 2025-11-24 10:14:39.784 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:14:39 np0005533252 nova_compute[230010]: 2025-11-24 10:14:39.785 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:14:39 np0005533252 nova_compute[230010]: 2025-11-24 10:14:39.785 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:14:39 np0005533252 nova_compute[230010]: 2025-11-24 10:14:39.785 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:14:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:14:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1493371527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.263 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.420 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.421 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4894MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.422 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.422 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.566 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.566 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.756 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing inventories for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.771 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating ProviderTree inventory for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.772 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Updating inventory in ProviderTree for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.785 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing aggregate associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.808 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Refreshing trait associations for resource provider 1b7b0f22-dba8-42a8-9de3-763c9152946e, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 24 05:14:40 np0005533252 nova_compute[230010]: 2025-11-24 10:14:40.824 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:14:40 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:40 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:14:40 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:40.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:14:41 np0005533252 nova_compute[230010]: 2025-11-24 10:14:41.300 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:14:41 np0005533252 nova_compute[230010]: 2025-11-24 10:14:41.306 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:14:41 np0005533252 nova_compute[230010]: 2025-11-24 10:14:41.319 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:14:41 np0005533252 nova_compute[230010]: 2025-11-24 10:14:41.321 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:14:41 np0005533252 nova_compute[230010]: 2025-11-24 10:14:41.321 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:14:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:41.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:41 np0005533252 nova_compute[230010]: 2025-11-24 10:14:41.889 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:42 np0005533252 nova_compute[230010]: 2025-11-24 10:14:42.088 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:42 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:42 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:42 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:42.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:43 np0005533252 nova_compute[230010]: 2025-11-24 10:14:43.322 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:14:43 np0005533252 nova_compute[230010]: 2025-11-24 10:14:43.322 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:14:43 np0005533252 nova_compute[230010]: 2025-11-24 10:14:43.322 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:14:43 np0005533252 nova_compute[230010]: 2025-11-24 10:14:43.338 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:14:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:43.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:44 np0005533252 podman[255372]: 2025-11-24 10:14:44.362465815 +0000 UTC m=+0.081696354 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 05:14:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:44 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:44 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:14:44 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:44.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:14:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:14:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:14:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:45.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:46 np0005533252 nova_compute[230010]: 2025-11-24 10:14:46.892 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:46 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:46 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:46 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:46.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:47 np0005533252 nova_compute[230010]: 2025-11-24 10:14:47.090 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:47 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:47 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:47 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:47.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:48 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:48 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:48 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:48.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:49 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:49 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:49 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:49 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:49.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:50 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:50 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:50 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:51 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:51 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:51 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:51.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:51 np0005533252 nova_compute[230010]: 2025-11-24 10:14:51.894 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:52 np0005533252 nova_compute[230010]: 2025-11-24 10:14:52.092 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:52 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:52 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:52 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:52.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:53 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:53 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:53 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:54 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:54 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:54 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:54 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:54.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:55 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:55 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:55 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:56 np0005533252 nova_compute[230010]: 2025-11-24 10:14:56.929 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:56 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:56 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:56 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:56.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:57 np0005533252 nova_compute[230010]: 2025-11-24 10:14:57.095 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:14:57 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:57 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:14:57 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:14:58 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:58 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:58 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:58.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:14:59 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:14:59 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:14:59 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:14:59 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:59.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:00 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:15:00 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:15:00 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:00 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:00 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:00.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:01 np0005533252 podman[255423]: 2025-11-24 10:15:01.323450736 +0000 UTC m=+0.057084611 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 05:15:01 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:01 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:01 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:01 np0005533252 nova_compute[230010]: 2025-11-24 10:15:01.995 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:02 np0005533252 nova_compute[230010]: 2025-11-24 10:15:02.096 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:02 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:02 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:02 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:02.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:03 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:03 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:03 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:04 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:15:04 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:04 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:04 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:05 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:05 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:05 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:05.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:06 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:06 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:06 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:07 np0005533252 nova_compute[230010]: 2025-11-24 10:15:07.098 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:07 np0005533252 nova_compute[230010]: 2025-11-24 10:15:07.601 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:07 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:07 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:07 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:07.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:08 np0005533252 podman[255529]: 2025-11-24 10:15:08.370265772 +0000 UTC m=+0.108340457 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 05:15:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:15:08 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:08 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:08 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:08 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [INF] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:15:09 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:15:09 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:09 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:09 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:09.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:10 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 05:15:10 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:15:10 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:15:10 np0005533252 ceph-mon[80009]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 05:15:10 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:10 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:15:10 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:10.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:15:11 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:11 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:11 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:11.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:12 np0005533252 nova_compute[230010]: 2025-11-24 10:15:12.103 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:12 np0005533252 nova_compute[230010]: 2025-11-24 10:15:12.603 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:12 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:13 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:13 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:13 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:13.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:14 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:15:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:15.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 05:15:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 05:15:15 np0005533252 podman[255586]: 2025-11-24 10:15:15.337858923 +0000 UTC m=+0.069901894 container health_status 6794d9d2f16f865643977633ea2bfec0506af6c4f15262c278b71a4c0754ea0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 24 05:15:15 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:15:15 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:15:15 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:15 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:15 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:15.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:15:16 np0005533252 ceph-mon[80009]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 05:15:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:17.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:17 np0005533252 nova_compute[230010]: 2025-11-24 10:15:17.108 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:17 np0005533252 nova_compute[230010]: 2025-11-24 10:15:17.605 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:17 np0005533252 systemd-logind[823]: New session 58 of user zuul.
Nov 24 05:15:17 np0005533252 systemd[1]: Started Session 58 of User zuul.
Nov 24 05:15:17 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:17 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:17 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:17.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:19 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:15:19 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:19 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:19 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:19.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:15:20.083 142336 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:15:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:15:20.085 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:15:20 np0005533252 ovn_metadata_agent[142331]: 2025-11-24 10:15:20.085 142336 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:15:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:21 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 05:15:21 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3403419153' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 05:15:21 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 05:15:21 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055159028' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 05:15:21 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:21 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:21 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:21.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:22 np0005533252 nova_compute[230010]: 2025-11-24 10:15:22.110 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:22 np0005533252 nova_compute[230010]: 2025-11-24 10:15:22.606 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:23 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:23 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:23 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:23.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:24 np0005533252 ovs-vsctl[255970]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 05:15:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:25.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:25 np0005533252 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 05:15:25 np0005533252 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 05:15:25 np0005533252 virtqemud[229578]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 05:15:25 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:25 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:25 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:25.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:27.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:27 np0005533252 nova_compute[230010]: 2025-11-24 10:15:27.112 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:27 np0005533252 nova_compute[230010]: 2025-11-24 10:15:27.608 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:27 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:15:27 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: cache status {prefix=cache status} (starting...)
Nov 24 05:15:27 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:27 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: client ls {prefix=client ls} (starting...)
Nov 24 05:15:27 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:27 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:27 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:27 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:27.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:27 np0005533252 lvm[256365]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 05:15:27 np0005533252 lvm[256365]: VG ceph_vg0 finished
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:28 np0005533252 nova_compute[230010]: 2025-11-24 10:15:28.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:28 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 05:15:28 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1465121536' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 05:15:28 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:29.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:29 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 05:15:29 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 05:15:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3317630471' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 05:15:29 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 05:15:29 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:29 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 05:15:29 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:29 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: ops {prefix=ops} (starting...)
Nov 24 05:15:29 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 24 05:15:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2434305700' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 05:15:29 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 24 05:15:29 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2452454843' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 05:15:29 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:29 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:29 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:29.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 24 05:15:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1049383097' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 05:15:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 05:15:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2514280768' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 05:15:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 24 05:15:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 05:15:30 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: session ls {prefix=session ls} (starting...)
Nov 24 05:15:30 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk Can't run that command on an inactive MDS!
Nov 24 05:15:30 np0005533252 ceph-mds[85277]: mds.cephfs.compute-1.vpamdk asok_command: status {prefix=status} (starting...)
Nov 24 05:15:30 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 05:15:30 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3079237173' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 05:15:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:31.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687570939' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3221653421' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2214528181' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/549447527' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 05:15:31 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2964773270' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 05:15:31 np0005533252 nova_compute[230010]: 2025-11-24 10:15:31.759 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:31 np0005533252 nova_compute[230010]: 2025-11-24 10:15:31.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:31 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:31 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:31 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:31.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/315873122' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/173568047' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 05:15:32 np0005533252 nova_compute[230010]: 2025-11-24 10:15:32.115 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:32 np0005533252 podman[257042]: 2025-11-24 10:15:32.213676694 +0000 UTC m=+0.076172438 container health_status 16798e42088c21440960ac0ba4f86339f9edab5c078277a1dcf4fb01c220329c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3371992669' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 05:15:32 np0005533252 nova_compute[230010]: 2025-11-24 10:15:32.610 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4241696948' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 05:15:32 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4157263929' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 05:15:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 05:15:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:33.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 05:15:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 24 05:15:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3889485254' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 05:15:33 np0005533252 nova_compute[230010]: 2025-11-24 10:15:33.764 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:33 np0005533252 nova_compute[230010]: 2025-11-24 10:15:33.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:33 np0005533252 nova_compute[230010]: 2025-11-24 10:15:33.765 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 05:15:33 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:33 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:33 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:33.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:33 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 05:15:33 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4268995238' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 458752 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634bef0c5a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bee53000 session 0x5634bf225e00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 434176 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 425984 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957883 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.697479248s of 56.708225250s, submitted: 3
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 409600 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960907 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 393216 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 385024 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634bfe22d20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 368640 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 360448 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960316 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.040000916s of 48.052692413s, submitted: 3
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959725 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 344064 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 327680 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634bef0cb40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 311296 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 294912 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79446016 unmapped: 286720 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 278528 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.977294922s of 56.983356476s, submitted: 2
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 262144 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634c029ba40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 245760 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960646 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 229376 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.226074219s of 36.229869843s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 212992 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963670 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc000 session 0x5634bd20be00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963670 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 172032 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.313743591s of 10.320786476s, submitted: 2
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963079 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 163840 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf727000 session 0x5634c0304f00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963079 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.823747635s of 10.826331139s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964591 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 6909 writes, 27K keys, 6909 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6909 writes, 1355 syncs, 5.10 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 485 writes, 766 keys, 485 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
Interval WAL: 485 writes, 231 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5634bb9db350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964591 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365579605s of 10.368579865s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964000 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 147456 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bf0fc400 session 0x5634bd030780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964921 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.830619812s of 23.935253143s, submitted: 3
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 122880 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966433 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 106496 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000038s
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 90112 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 73728 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.325916290s of 47.341407776s, submitted: 2
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 49152 heap: 79732736 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,1])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1843200 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1671168 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1671168 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1662976 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1654784 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1646592 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1638400 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 1630208 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965842 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.360378265s of 33.329280853s, submitted: 257
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1622016 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1613824 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106800 session 0x5634c0305c20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1597440 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965251 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1581056 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.545049667s of 30.567733765s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968275 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969787 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.074189186s of 12.084068298s, submitted: 3
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 1564672 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1556480 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 1556480 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1540096 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 1531904 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 1523712 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969196 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.175689697s of 27.178615570s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970708 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1515520 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be106400 session 0x5634c029b0e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969526 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.582208633s of 30.591884613s, submitted: 3
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971038 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 1507328 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1490944 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970447 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.724964142s of 39.735435486s, submitted: 2
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1474560 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1458176 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1449984 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1433600 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1417216 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634bee53000 session 0x5634c055af00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969856 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1400832 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.969955444s of 88.974121094s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1384448 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971368 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fc9ed000/0x0/0x4ffc00000, data 0x177815/0x22f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1368064 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 ms_handle_reset con 0x5634be107400 session 0x5634c06b1680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1343488 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.761932373s of 24.764957428s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 294912 heap: 81829888 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975134 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc9e9000/0x0/0x4ffc00000, data 0x179901/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 16998400 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc1e4000/0x0/0x4ffc00000, data 0x97ba51/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 153 ms_handle_reset con 0x5634be106400 session 0x5634bf7bed20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 16867328 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 16834560 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 154 ms_handle_reset con 0x5634be106800 session 0x5634c06e63c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077664 data_alloc: 218103808 data_used: 151552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 16793600 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.009191513s of 16.191659927s, submitted: 44
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077816 data_alloc: 218103808 data_used: 155648
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 16777216 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf727000 session 0x5634c0304b40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf539c00 session 0x5634bf53bc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be106400 session 0x5634c06b2780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 16785408 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be106800 session 0x5634bfe22000
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634be107400 session 0x5634c06b0960
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 5079040 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 ms_handle_reset con 0x5634bf727000 session 0x5634c06b01e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108216 data_alloc: 234881024 data_used: 11628544
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 5087232 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fbd68000/0x0/0x4ffc00000, data 0xdf1c79/0xeb1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.016056061s of 35.019523621s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 5062656 heap: 98615296 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634bf034400 session 0x5634bfdc25a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634bf034400 session 0x5634bfd5ed20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be106400 session 0x5634bf53ba40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be106800 session 0x5634bf08a3c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 157 ms_handle_reset con 0x5634be107400 session 0x5634c06cd860
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fb8f3000/0x0/0x4ffc00000, data 0x1264eb8/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151181 data_alloc: 234881024 data_used: 11628544
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95297536 unmapped: 5488640 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f3000/0x0/0x4ffc00000, data 0x1264eb8/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06e6000
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95305728 unmapped: 5480448 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 5472256 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172443 data_alloc: 234881024 data_used: 14356480
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 2924544 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184603 data_alloc: 234881024 data_used: 16195584
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184755 data_alloc: 234881024 data_used: 16199680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 99491840 unmapped: 1294336 heap: 100786176 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fb8f1000/0x0/0x4ffc00000, data 0x1266e8a/0x132a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.278177261s of 20.407505035s, submitted: 45
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 2703360 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263495 data_alloc: 234881024 data_used: 18071552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd3000/0x0/0x4ffc00000, data 0x19e5e8a/0x1aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 1523712 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 3145728 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107077632 unmapped: 3145728 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256551 data_alloc: 234881024 data_used: 18071552
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3137536 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3137536 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fd0000/0x0/0x4ffc00000, data 0x19e8e8a/0x1aac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257615 data_alloc: 234881024 data_used: 18145280
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.662779808s of 14.812747955s, submitted: 82
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257839 data_alloc: 234881024 data_used: 18145280
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9fcf000/0x0/0x4ffc00000, data 0x19e9e8a/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 3096576 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 3088384 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b2f00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf224d20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfcb32c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 3104768 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c06b23c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688000 session 0x5634bf53ab40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257991 data_alloc: 234881024 data_used: 18673664
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 1818624 heap: 110223360 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634be148780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.993793488s of 10.001093864s, submitted: 2
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfceb40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcbf9c20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd20a780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688400 session 0x5634be1adc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfd5f0e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f977a000/0x0/0x4ffc00000, data 0x1e2ee8a/0x1ef2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c41e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 8273920 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bd4cfe00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298504 data_alloc: 234881024 data_used: 18673664
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 8290304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf7be1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688800 session 0x5634bf4421e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 8970240 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107945984 unmapped: 8708096 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323629 data_alloc: 234881024 data_used: 21921792
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 6242304 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 6234112 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323629 data_alloc: 234881024 data_used: 21921792
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110419968 unmapped: 6234112 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9779000/0x0/0x4ffc00000, data 0x1e2ee99/0x1ef3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 6201344 heap: 116654080 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.279289246s of 18.403636932s, submitted: 31
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 4358144 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389915 data_alloc: 234881024 data_used: 22224896
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f55000/0x0/0x4ffc00000, data 0x264ae99/0x270f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1,1])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 4030464 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 3948544 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f4a000/0x0/0x4ffc00000, data 0x2654e99/0x2719000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 3940352 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390001 data_alloc: 234881024 data_used: 22290432
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f52000/0x0/0x4ffc00000, data 0x2654e99/0x2719000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 5496832 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634c02cbe00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 7290880 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf18c780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270311 data_alloc: 234881024 data_used: 18673664
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x19e9e8a/0x1aad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 7528448 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.905948639s of 14.233164787s, submitted: 109
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06b3e00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be106400 session 0x5634bcfe1c20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 7520256 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bfe2f680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 10739712 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 10739712 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 11157504 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136979 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106086400 unmapped: 11894784 heap: 117981184 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf443860
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf443a40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634bf442d20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd462f00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.204719543s of 34.300907135s, submitted: 31
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd462b40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107069440 unmapped: 23027712 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bcfce5a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfcf4a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcfce780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06d7e00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634bf7c0d20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225465 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634be0fb800 session 0x5634bf7c1680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c0000
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 23306240 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7c03c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 23289856 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 23683072 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 18817024 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289986 data_alloc: 234881024 data_used: 21778432
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289986 data_alloc: 234881024 data_used: 21778432
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 18808832 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9d28000/0x0/0x4ffc00000, data 0x187eefc/0x1944000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 18800640 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.452789307s of 16.602790833s, submitted: 56
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 13295616 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373192 data_alloc: 234881024 data_used: 22908928
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 10567680 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f949c000/0x0/0x4ffc00000, data 0x2101efc/0x21c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1373648 data_alloc: 234881024 data_used: 22921216
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 10559488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367704 data_alloc: 234881024 data_used: 22933504
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117694464 unmapped: 12402688 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.744583130s of 15.027527809s, submitted: 96
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9484000/0x0/0x4ffc00000, data 0x2122efc/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367784 data_alloc: 234881024 data_used: 22933504
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x2128efc/0x21ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 12394496 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x2128efc/0x21ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367784 data_alloc: 234881024 data_used: 22933504
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 12386304 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947b000/0x0/0x4ffc00000, data 0x212befc/0x21f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369392 data_alloc: 234881024 data_used: 23019520
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 12369920 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f947b000/0x0/0x4ffc00000, data 0x212befc/0x21f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.042746544s of 13.056042671s, submitted: 4
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 12345344 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 12345344 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf1da1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bd4cef00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf4c7680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b0000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149888 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 20987904 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.141063690s of 20.330921173s, submitted: 61
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c06b2b40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b23c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf727000 session 0x5634c06b2d20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c06b3680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bcfcfe00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157691 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bf7be5a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 21381120 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 21372928 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.1 total, 600.0 interval
                                              Cumulative writes: 8425 writes, 31K keys, 8425 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                              Cumulative WAL: 8425 writes, 2022 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 1516 writes, 4428 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s
                                              Interval WAL: 1516 writes, 667 syncs, 2.27 writes per sync, written: 0.00 GB, 0.01 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161623 data_alloc: 234881024 data_used: 12693504
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161623 data_alloc: 234881024 data_used: 12693504
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 21364736 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa72f000/0x0/0x4ffc00000, data 0xe79e8a/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.318338394s of 17.361791611s, submitted: 13
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109363200 unmapped: 20733952 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213389 data_alloc: 234881024 data_used: 12693504
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 21151744 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 21094400 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 20758528 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 20750336 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 20750336 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x1557e8a/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220401 data_alloc: 234881024 data_used: 12890112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 20766720 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 20766720 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 20758528 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218617 data_alloc: 234881024 data_used: 12890112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.672043800s of 14.786133766s, submitted: 55
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x155ae8a/0x161e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 20824064 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218857 data_alloc: 234881024 data_used: 12890112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 20815872 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08b860
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf08a780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08ab40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf08bc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08ba40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634bf53b4a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0688c00 session 0x5634bf53ab40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf53a1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf53ad20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253835 data_alloc: 234881024 data_used: 12890112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9bc9000/0x0/0x4ffc00000, data 0x19dee9a/0x1aa3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53af00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253835 data_alloc: 234881024 data_used: 12890112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634bf53bc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 20799488 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3c00 session 0x5634bf53ba40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.216509819s of 12.251233101s, submitted: 6
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634c06b0f00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109617152 unmapped: 20480000 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109625344 unmapped: 20471808 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 18989056 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291410 data_alloc: 234881024 data_used: 17616896
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba4000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba3000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291578 data_alloc: 234881024 data_used: 17616896
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 18399232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.640722275s of 12.708586693s, submitted: 10
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9ba3000/0x0/0x4ffc00000, data 0x1a02eaa/0x1ac8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 18825216 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298216 data_alloc: 234881024 data_used: 17735680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9b5e000/0x0/0x4ffc00000, data 0x1a48eaa/0x1b0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300012 data_alloc: 234881024 data_used: 17735680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9b5e000/0x0/0x4ffc00000, data 0x1a48eaa/0x1b0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 18898944 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bfe221e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bdc7c000
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109428736 unmapped: 20668416 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634c029ad20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224813 data_alloc: 234881024 data_used: 12890112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.472726822s of 12.585700035s, submitted: 33
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634c06b0b40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689c00 session 0x5634bf7c1680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 20660224 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bd20b2c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa04d000/0x0/0x4ffc00000, data 0x155be8a/0x161f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 21446656 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 22536192 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.771172523s of 14.867496490s, submitted: 30
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 22495232 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 21422080 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 21192704 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109068288 unmapped: 21028864 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162905 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 21020672 heap: 130097152 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08b2c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf7c05a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bd463e00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c06b3680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bef0d680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 24567808 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212735 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 24559616 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 24535040 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfe34a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 109780992 unmapped: 24518656 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212735 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 24141824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 21692416 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 21659648 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 21659648 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257575 data_alloc: 234881024 data_used: 18759680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 21626880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257575 data_alloc: 234881024 data_used: 18759680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 21618688 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa14c000/0x0/0x4ffc00000, data 0x145ce8a/0x1520000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.294692993s of 25.236562729s, submitted: 260
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 14884864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 12951552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 12951552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 12918784 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329075 data_alloc: 234881024 data_used: 20541440
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 12918784 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 12877824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329075 data_alloc: 234881024 data_used: 20541440
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121454592 unmapped: 12845056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd23a800 session 0x5634bcfe05a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329379 data_alloc: 234881024 data_used: 20549632
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 12836864 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 12828672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 12828672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329379 data_alloc: 234881024 data_used: 20549632
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 12820480 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330139 data_alloc: 234881024 data_used: 20570112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 12804096 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.572525024s of 26.743309021s, submitted: 82
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9abe000/0x0/0x4ffc00000, data 0x1adce8a/0x1ba0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e6960
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3000 session 0x5634c06b05a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634beabcc00 session 0x5634bf08a1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfcf860
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfe23c20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350053 data_alloc: 234881024 data_used: 20570112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972c000/0x0/0x4ffc00000, data 0x1e7ce8a/0x1f40000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 15040512 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634bfd5e780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350693 data_alloc: 234881024 data_used: 20570112
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 15032320 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 14876672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 12206080 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376837 data_alloc: 234881024 data_used: 24346624
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376837 data_alloc: 234881024 data_used: 24346624
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f972a000/0x0/0x4ffc00000, data 0x1e7de8a/0x1f41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 12140544 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.929925919s of 18.974597931s, submitted: 10
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125714432 unmapped: 8585216 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9393000/0x0/0x4ffc00000, data 0x220de8a/0x22d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412031 data_alloc: 234881024 data_used: 24842240
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 9084928 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 9052160 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412047 data_alloc: 234881024 data_used: 24842240
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 9043968 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412047 data_alloc: 234881024 data_used: 24842240
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 9035776 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.664098740s of 16.812852859s, submitted: 44
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125419520 unmapped: 8880128 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125419520 unmapped: 8880128 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 8871936 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125435904 unmapped: 8863744 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413159 data_alloc: 234881024 data_used: 24825856
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 8855552 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 8847360 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.351060867s of 15.361025810s, submitted: 14
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 8790016 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411143 data_alloc: 234881024 data_used: 24825856
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125509632 unmapped: 8790016 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 8781824 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 8773632 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: mgrc ms_handle_reset ms_handle_reset con 0x5634bddcfc00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3769522832
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/3769522832]
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: mgrc handle_mgr_configure stats_period=5
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd1e9000 session 0x5634bf4570e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf538800 session 0x5634bfa2bc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 8691712 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411311 data_alloc: 234881024 data_used: 24825856
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 8683520 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 8675328 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634c06e6d20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf225680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 8675328 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.096628189s of 21.114942551s, submitted: 5
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327619 data_alloc: 234881024 data_used: 20619264
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9390000/0x0/0x4ffc00000, data 0x2218e8a/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82ed20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9acb000/0x0/0x4ffc00000, data 0x1adde8a/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327619 data_alloc: 234881024 data_used: 20619264
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 11395072 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bcfe32c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689c00 session 0x5634be1ad680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd20b680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9acb000/0x0/0x4ffc00000, data 0x1adde8a/0x1ba1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176425 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 16629760 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.783638000s of 33.895526886s, submitted: 33
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b2780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bf7c4b40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf7c41e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf7c52c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634c02e0b40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220693 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 05:15:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1486480020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 17965056 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e01e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7be960
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220693 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bf7bfc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf7bf4a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 17948672 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1390e8a/0x1454000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260866 data_alloc: 234881024 data_used: 18022400
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 17358848 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf82eb40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.973909378s of 12.041707993s, submitted: 15
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82fc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf7c52c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181186 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 19578880 heap: 134299648 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53a000
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689800 session 0x5634bcfcf860
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf08a1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06b05a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.199029922s of 17.423206329s, submitted: 27
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bfcb32c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bcfe14a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e72c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3800 session 0x5634c06e6f00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029a5a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0x13c9e9a/0x148e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226040 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 23748608 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf18de00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bdc7c780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 23732224 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c23a3400 session 0x5634bf444000
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e1860
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 23724032 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0x13c9e9a/0x148e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 23724032 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227854 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 23715840 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa1dd000/0x0/0x4ffc00000, data 0x13c9eaa/0x148f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 22036480 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf82f0e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf53bc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.103581429s of 11.147413254s, submitted: 8
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bf7c52c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186069 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e400 session 0x5634bf7bf4a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e400 session 0x5634bfa2a1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bfa2bc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bfa2a3c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 25255936 heap: 138502144 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.712280273s of 27.752235413s, submitted: 13
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 29245440 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bfa2a000
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfa26c00 session 0x5634bf2252c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf224960
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf08a1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bf7bef00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228500 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228500 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 29564928 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.442607880s of 10.648617744s, submitted: 19
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bdc7c780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa296000/0x0/0x4ffc00000, data 0x1311e9a/0x13d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113442816 unmapped: 29261824 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 29204480 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 28254208 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259524 data_alloc: 234881024 data_used: 16396288
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259524 data_alloc: 234881024 data_used: 16396288
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 27893760 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa272000/0x0/0x4ffc00000, data 0x1335e9a/0x13fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 27885568 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 27885568 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.894562721s of 12.903012276s, submitted: 2
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261912 data_alloc: 234881024 data_used: 16449536
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 27590656 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 27582464 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274846 data_alloc: 234881024 data_used: 16560128
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274846 data_alloc: 234881024 data_used: 16560128
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa218000/0x0/0x4ffc00000, data 0x1380e9a/0x1445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 25968640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.734338760s of 14.816822052s, submitted: 27
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634c02e01e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bcfe1a40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270054 data_alloc: 234881024 data_used: 16560128
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf08be00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa7b1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193664 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 28024832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.250144958s of 21.353006363s, submitted: 31
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bcfe2b40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf1da5a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf726800 session 0x5634bfcb30e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf82eb40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c029af00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fee8a/0x13c2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237405 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 27615232 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfa2ab40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fee8a/0x13c2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 27533312 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 27664384 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269042 data_alloc: 234881024 data_used: 16138240
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26927104 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269042 data_alloc: 234881024 data_used: 16138240
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115785728 unmapped: 26918912 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634bf445a40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea400 session 0x5634bf7bfe00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd4614a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bf029e00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.969479561s of 16.080394745s, submitted: 32
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfe225a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634c06cd680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1322e8a/0x13e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea800 session 0x5634bf08a960
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c02e14a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634c06e65a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303056 data_alloc: 234881024 data_used: 16138240
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 26763264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 21438464 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af9000/0x0/0x4ffc00000, data 0x1aa6e9a/0x1b6b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22560768 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bfcb25a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336043 data_alloc: 234881024 data_used: 17305600
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120160256 unmapped: 22544384 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 22536192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121495552 unmapped: 21209088 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.558697701s of 10.774977684s, submitted: 67
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365375 data_alloc: 234881024 data_used: 21561344
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365375 data_alloc: 234881024 data_used: 21561344
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9af7000/0x0/0x4ffc00000, data 0x1ab0e9a/0x1b75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 20750336 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.470090866s of 10.473713875s, submitted: 1
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 19791872 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 19660800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1374765 data_alloc: 234881024 data_used: 21581824
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 18587648 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382269 data_alloc: 234881024 data_used: 21577728
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 18554880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x1b90e9a/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124182528 unmapped: 18522112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08ea000 session 0x5634bcbf9e00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.591684341s of 11.705703735s, submitted: 37
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634c06b03c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377601 data_alloc: 234881024 data_used: 21577728
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 20422656 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bf7be1e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9f1c000/0x0/0x4ffc00000, data 0x168ce8a/0x1750000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122314752 unmapped: 20389888 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bf4450e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634c06e72c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119160832 unmapped: 23543808 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf2245a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 23560192 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf038400 session 0x5634bcfce780
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bd463e00
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bd4632c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207973 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bf443c20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.324642181s of 25.657859802s, submitted: 76
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [0,0,0,0,0,2])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 17776640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634be1ada40
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf7c10e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf53a3c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634bf53b4a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bd031680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285564 data_alloc: 234881024 data_used: 12169216
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bd20b680
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bd20bc20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c08eac00 session 0x5634bd20b860
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf032800 session 0x5634c029b2c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 23994368 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287325 data_alloc: 234881024 data_used: 12169216
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23977984 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353141 data_alloc: 234881024 data_used: 21815296
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x1786e9a/0x184b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353141 data_alloc: 234881024 data_used: 21815296
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 20267008 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.633270264s of 20.794521332s, submitted: 35
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 13688832 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128851968 unmapped: 13852672 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8f68000/0x0/0x4ffc00000, data 0x2227e9a/0x22ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128991232 unmapped: 13713408 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 128991232 unmapped: 13713408 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8ebf000/0x0/0x4ffc00000, data 0x22d8e9a/0x239d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457277 data_alloc: 234881024 data_used: 22888448
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 13680640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 13680640 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 13672448 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x22fce9a/0x23c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454133 data_alloc: 234881024 data_used: 22888448
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 13598720 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 13590528 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 13590528 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.613185883s of 11.909707069s, submitted: 129
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e9b000/0x0/0x4ffc00000, data 0x22fce9a/0x23c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e94000/0x0/0x4ffc00000, data 0x2303e9a/0x23c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453885 data_alloc: 234881024 data_used: 22888448
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f8e94000/0x0/0x4ffc00000, data 0x2303e9a/0x23c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0a800 session 0x5634bf82f0e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bfc0b800 session 0x5634bfcb3860
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 13516800 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c025e000 session 0x5634bd20b2c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 20619264 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 20611072 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 20602880 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 20594688 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 20586496 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 20561920 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 20553728 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 20570112 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'config show' '{prefix=config show}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 20537344 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 20865024 heap: 142704640 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'log dump' '{prefix=log dump}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 132923392 unmapped: 20824064 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'perf dump' '{prefix=perf dump}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'perf schema' '{prefix=perf schema}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121241600 unmapped: 32505856 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 32464896 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 32464896 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 32456704 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 234881024 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 32448512 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 32440320 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 32440320 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 32440320 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 32440320 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 32432128 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2914 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2050 writes, 6533 keys, 2050 commit groups, 1.0 writes per commit group, ingest: 7.27 MB, 0.01 MB/s
Interval WAL: 2050 writes, 892 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 32423936 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121331712 unmapped: 32415744 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 32407552 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 32399360 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 32391168 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 32382976 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 206.406784058s of 206.570480347s, submitted: 57
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 32374784 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 32235520 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf0fc400 session 0x5634c06e61e0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121782272 unmapped: 31965184 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 31956992 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121798656 unmapped: 31948800 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 31940608 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 31940608 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 31940608 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121806848 unmapped: 31940608 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 31932416 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 31932416 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 31932416 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 31932416 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 31924224 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 31924224 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 31924224 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121823232 unmapped: 31924224 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 31916032 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121839616 unmapped: 31907840 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 31899648 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121856000 unmapped: 31891456 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31883264 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31866880 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 31858688 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 31850496 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 31842304 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 31834112 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 31825920 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 31817728 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 31809536 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 31801344 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121954304 unmapped: 31793152 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 31784960 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 31776768 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 31768576 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 31760384 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 31744000 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122011648 unmapped: 31735808 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 31719424 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 31711232 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122044416 unmapped: 31703040 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31694848 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 31686656 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634c0689000 session 0x5634c055b4a0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122068992 unmapped: 31678464 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 31670272 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 31662080 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31653888 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31653888 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31653888 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 31645696 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122109952 unmapped: 31637504 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 31629312 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31612928 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bf538400 session 0x5634bd20ad20
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 ms_handle_reset con 0x5634bd1e9000 session 0x5634bcfe03c0
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 31604736 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 31596544 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 31596544 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31588352 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31580160 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 31571968 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 31571968 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31555584 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122200064 unmapped: 31547392 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225505 data_alloc: 218103808 data_used: 12165120
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122216448 unmapped: 31531008 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'config show' '{prefix=config show}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 31563776 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 31203328 heap: 153747456 old mem: 2845415833 new mem: 2845415833
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fa3a1000/0x0/0x4ffc00000, data 0xdf7e8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Nov 24 05:15:34 np0005533252 ceph-osd[77497]: do_command 'log dump' '{prefix=log dump}'
Nov 24 05:15:34 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 05:15:34 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1519524842' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 05:15:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:35.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:35 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 05:15:35 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2192234632' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 05:15:35 np0005533252 nova_compute[230010]: 2025-11-24 10:15:35.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:35 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:35 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:35 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:35.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:36 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 05:15:36 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4072984199' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 05:15:36 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 24 05:15:36 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2470601515' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 05:15:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:37 np0005533252 nova_compute[230010]: 2025-11-24 10:15:37.117 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:37 np0005533252 nova_compute[230010]: 2025-11-24 10:15:37.613 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:15:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 24 05:15:37 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3419392445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 05:15:37 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:37 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:37 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:37.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:37 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 24 05:15:37 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3340040457' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 05:15:37 np0005533252 systemd[1]: Starting Hostname Service...
Nov 24 05:15:38 np0005533252 systemd[1]: Started Hostname Service.
Nov 24 05:15:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 24 05:15:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/946831100' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 05:15:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 24 05:15:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1982131563' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 05:15:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 24 05:15:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3599221387' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 05:15:38 np0005533252 nova_compute[230010]: 2025-11-24 10:15:38.760 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:38 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 24 05:15:38 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3983569009' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 05:15:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:39.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/659856770' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3371967134' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 05:15:39 np0005533252 podman[258127]: 2025-11-24 10:15:39.405017003 +0000 UTC m=+0.125263363 container health_status c4a7fece2a8abbd773f184380ebf0443df5298e1c3e7a580495db5c46749acd2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1915736469' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/305860648' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 05:15:39 np0005533252 nova_compute[230010]: 2025-11-24 10:15:39.765 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:39 np0005533252 nova_compute[230010]: 2025-11-24 10:15:39.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:15:39 np0005533252 nova_compute[230010]: 2025-11-24 10:15:39.793 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:15:39 np0005533252 nova_compute[230010]: 2025-11-24 10:15:39.794 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:15:39 np0005533252 nova_compute[230010]: 2025-11-24 10:15:39.794 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 05:15:39 np0005533252 nova_compute[230010]: 2025-11-24 10:15:39.794 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:15:39 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:39 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:39 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:39.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1378070210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 24 05:15:39 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2716061122' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 05:15:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:15:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/103572047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:15:40 np0005533252 nova_compute[230010]: 2025-11-24 10:15:40.279 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:15:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 05:15:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1315247719' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 05:15:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 24 05:15:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4202446663' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 05:15:40 np0005533252 nova_compute[230010]: 2025-11-24 10:15:40.475 230014 WARNING nova.virt.libvirt.driver [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 05:15:40 np0005533252 nova_compute[230010]: 2025-11-24 10:15:40.476 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4547MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 05:15:40 np0005533252 nova_compute[230010]: 2025-11-24 10:15:40.476 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 05:15:40 np0005533252 nova_compute[230010]: 2025-11-24 10:15:40.477 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 05:15:40 np0005533252 nova_compute[230010]: 2025-11-24 10:15:40.546 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 05:15:40 np0005533252 nova_compute[230010]: 2025-11-24 10:15:40.551 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 05:15:40 np0005533252 nova_compute[230010]: 2025-11-24 10:15:40.571 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 05:15:40 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 24 05:15:40 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1938538434' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 05:15:41 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 05:15:41 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1159984311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 05:15:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:41.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:41 np0005533252 nova_compute[230010]: 2025-11-24 10:15:41.061 230014 DEBUG oslo_concurrency.processutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 05:15:41 np0005533252 nova_compute[230010]: 2025-11-24 10:15:41.069 230014 DEBUG nova.compute.provider_tree [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 1b7b0f22-dba8-42a8-9de3-763c9152946e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 05:15:41 np0005533252 nova_compute[230010]: 2025-11-24 10:15:41.084 230014 DEBUG nova.scheduler.client.report [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Inventory has not changed for provider 1b7b0f22-dba8-42a8-9de3-763c9152946e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 05:15:41 np0005533252 nova_compute[230010]: 2025-11-24 10:15:41.087 230014 DEBUG nova.compute.resource_tracker [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 05:15:41 np0005533252 nova_compute[230010]: 2025-11-24 10:15:41.088 230014 DEBUG oslo_concurrency.lockutils [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 05:15:41 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:41 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:41 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:41.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:42 np0005533252 nova_compute[230010]: 2025-11-24 10:15:42.119 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:42 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 05:15:42 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 05:15:42 np0005533252 nova_compute[230010]: 2025-11-24 10:15:42.614 230014 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 24 05:15:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 05:15:42 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 24 05:15:42 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2091781462' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 05:15:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:43 np0005533252 nova_compute[230010]: 2025-11-24 10:15:43.088 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:43 np0005533252 nova_compute[230010]: 2025-11-24 10:15:43.089 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 05:15:43 np0005533252 nova_compute[230010]: 2025-11-24 10:15:43.089 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 05:15:43 np0005533252 nova_compute[230010]: 2025-11-24 10:15:43.105 230014 DEBUG nova.compute.manager [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 05:15:43 np0005533252 nova_compute[230010]: 2025-11-24 10:15:43.106 230014 DEBUG oslo_service.periodic_task [None req-c9e58c13-4f64-4399-9731-28c829d2a5d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 05:15:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 24 05:15:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/100271252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 05:15:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 05:15:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 05:15:43 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 05:15:43 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2412318721' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 05:15:43 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:43 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 05:15:43 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:43.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 05:15:44 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 24 05:15:44 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1546744390' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 05:15:44 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 05:15:44 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 05:15:45 np0005533252 radosgw[81417]: ====== starting new request req=0x7fa9789055d0 =====
Nov 24 05:15:45 np0005533252 radosgw[81417]: ====== req done req=0x7fa9789055d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 05:15:45 np0005533252 radosgw[81417]: beast: 0x7fa9789055d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:45.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 05:15:45 np0005533252 ceph-mon[80009]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 24 05:15:45 np0005533252 ceph-mon[80009]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1739736414' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
